Tuesday, November 17, 2020

ATTO Technology Unveils ATTO SiliconDisk RAM-Based Ethernet Fabric-Connected Storage Appliance

100X faster than flash, SiliconDisk represents a new tier in the storage pyramid

ATTO Technology, Inc., has unveiled ATTO SiliconDisk, a new, no-compromise storage solution that combines cutting-edge speed and performance with the flexibility and sharing capabilities of Ethernet connectivity.

ATTO SiliconDisk is a scalable, state-of-the-art RAM-based storage appliance that is 100 times faster than flash-based storage solutions. Designed to be quickly set up for access by multiple servers, SiliconDisk far exceeds current SSD solutions for performance and extensibility with under 600 nanoseconds of latency, four 100Gb Ethernet ports and 25GB/s of sustained throughput. Best of all, SiliconDisk requires no special software, no application changes and no re-architecting of data centers, just plug and play.
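As a back-of-the-envelope check on the figures above (a hypothetical script, not ATTO tooling), four 100Gb Ethernet ports give an aggregate line rate of 50GB/s, so the quoted 25GB/s sustained throughput is half of raw line rate:

```python
# Back-of-the-envelope arithmetic on the quoted specs (illustrative only,
# not ATTO software): 4 x 100Gb/s ports vs. 25 GB/s sustained throughput.

PORTS = 4
GBITS_PER_PORT = 100      # each Ethernet port runs at 100 Gb/s
BITS_PER_BYTE = 8

line_rate_gb_s = PORTS * GBITS_PER_PORT / BITS_PER_BYTE   # aggregate GB/s
sustained_gb_s = 25.0     # sustained throughput quoted in the announcement

print(f"aggregate line rate: {line_rate_gb_s:.0f} GB/s")                  # 50 GB/s
print(f"sustained vs. line rate: {sustained_gb_s / line_rate_gb_s:.0%}")  # 50%
```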

System architects recognize that there’s a storage performance gap even with flash technology, which is fast but not fast enough to overcome the challenges modern data loads impose. ATTO SiliconDisk represents a new tier in the storage pyramid, addressing the gap between RAM and traditional storage.

“SiliconDisk is similar in concept to the very first SCSI product ATTO released as a start-up 32 years ago,” said Timothy J. Klein, president and CEO, ATTO Technology. “Of course, this one is far better, faster and this time it’s shareable. We are quite excited about this new product line.”

By bridging that gap, SiliconDisk becomes essential for next-generation data processing:

  • increases performance predictability under peak loads
  • provides guaranteed ultra-low latency for critical applications
  • boosts processing and performance capabilities with RAM-based persistent storage

Data is instantly stored and retrieved, making SiliconDisk ideal for accelerating real-time data analytics. Artificial intelligence and machine learning, financial trading and medical imaging applications can capture and analyze data 100x faster than before. Every node on a high-availability shared fabric can have access to RAM-level storage data.

SiliconDisk is the first and only 1U DRAM-based solid-state storage appliance with multiple 100GbE port connectivity. The four channels of ultra-fast 100GbE are integrated into a single chip and linked to high-speed RAM, all managed by ATTO xCORE storage controller technology to eliminate performance bottlenecks.

Completing the package of exclusive, built-in technologies are ATTO RToptimizer™ and ATTO Infinite Write Endurance™. RToptimizer delivers real-time performance analytics of storage network connections, storage utilization, as well as overall SiliconDisk data performance for quick and accurate solution optimization. With Infinite Write Endurance, RAM used in SiliconDisk has no “per write” flash performance penalties or worry of memory wear-out.


Tuesday, November 10, 2020

How Spectra Logic Overcame a Ransomware Attack

If you’re in the data storage business, the last thing you want to do is “announce” a ransomware attack. But keeping quiet is exactly the wrong instinct. We were hit with ransomware, and as a data storage business, we feel it is important to share the story in order to help others prepare.

As Senior Director of IT, I’m not usually involved in the small, day-to-day glitches that occur. I’ve got a great team of IT professionals, and they rarely have to involve me in such normal operations. Thursday morning, May 7, 2020, would prove to be anything but “normal operations.”

On that day, at roughly 9:20 a.m., two members of my staff reported that lots of little things (none of which were related) were breaking. This was not normal. When a third staff member came in to report the same, there was a pause and then a scramble. We all thought the worst – have we been breached? Moments later we were in the hall, running to the data center.

One of my guys jumped on a server to see if we could figure out what was happening. Searching the files, he found a ransom note. Our hearts dropped, but our feet hit the floor. We ran around physically cutting the cords between servers so they couldn’t communicate with other servers and spread the virus further. Then we brought them all down.

It was now 10:45 a.m. and we heard one of the strangest sounds we had ever heard -- complete silence. Our data center hadn’t been silent since we moved into it in 2012. We’d planned for this situation for years, but it was still uncharted waters for all of us. Once we got the machines down, we had a minute to breathe. The ransom note informed us that we had been hit by the “Netwalker” virus. With a rough calculation, I figured the ransom would be around $3.6 million, and we had five days to pay it off in bitcoin or we were on our own.

I had one person checking on backups to figure out exactly what we had for a disaster recovery scenario. We rely on CommVault for our daily backups to both Spectra tape and Spectra BlackPearl® NAS. In addition, we augment those backups with VM snapshots and StorCycle® software for data migration.  

 We realized that our email server was not compromised. At least we could still communicate with the rest of the company and the rest of the world. We put as much protection around it as possible and brought it back up. 

By noon, I emailed the appropriate individuals and groups throughout the company. I also contacted the FBI, explained the situation, and they promised that someone from their cybersecurity team would call me back.

I assigned a group to start bringing up department servers completely disconnected from the network. We realized that if a server had 100TB of data, it now showed up as a single 100TB encrypted file. Out of a total of 600 servers, including virtual machines, we had 150 servers that were compromised.  

By 2 p.m. we confirmed a few things that helped us take heart. We had tape backups from the previous Friday. Our total possible data loss would be three working days – not what you want to have happen, but we would be within my SLA. Tape would get us back to the beginning of the week, but we’re a transactional company; every minute represents thousands of transactions. We run daily disk snapshots of our Nimble flash arrays. We confirmed that we had uncompromised disk images for about 90 percent of our systems.

Our legal department informed me that they’d bought “ransomware insurance” a few months earlier -- something the IT team was not initially aware of. It was a stroke of luck. By roughly 4 p.m., our insurance company set us up with a security consulting firm that deals with these issues. They told us exactly what to do to “stop the bleeding.” I put my team on split shifts around the clock. They were either working or getting a few hours of sleep.  

Around 7:30 p.m., we were on a call with the FBI cybersecurity team. They’ve dealt with this a lot and said our only options were to negotiate with the threat actors or rebuild our data center from scratch. We were roughly 10 hours into the ordeal, and it already felt like it had been days. 

This attack started on a Thursday morning. By the wee hours of Monday morning, we’d stopped the bleeding and done a full triage to assess our options. We had roughly 24 hours before we had to pay the ransom or lose that option altogether. Keep in mind, at this point, we hadn’t recovered a single file. It had taken that long just to secure all servers and ensure that we had stopped the virus from spreading.

We had a meeting with all involved players: The FBI, our security team, our legal department, and my entire staff. I told them we weren’t going to pay the ransom. In actuality, I never considered it an option. Once we confirmed that we had a backup on tape, we had the confidence to walk away from any options involving paying or negotiating the ransom.  

We knew what our recovery effort would be. This wasn’t just about restoring some files; this was a full-on disaster recovery operation, including complete wipes and rebuilds of every server. Based on feedback we got, it was estimated that it would take four to six weeks for us to get back up and running. With that in mind, we started rebuilding. It took five days to get the company back up; it took another week or so to get all of our systems back online; and it took another two weeks after that to get all of the kinks in connections worked out.


So, how did this happen? In late March, roughly a month before the attack, we’d sent hundreds of employees home to work remotely due to COVID-19. We went from a 99 percent on-premise work environment to a 99 percent remote-office environment -- overnight. Cybercriminals were all too aware of what was happening in the world, and they exploited it. In fact, I’ve come across some scary facts since this event. According to VMware Carbon Black, ransomware attacks have increased by 900 percent this year. And the security firm McAfee recently reported that Netwalker cybercriminals have made more than $25 million in ransom payments since March. 

One of our employees was VPN’d in on a private laptop. They opened a piece of malware that would have been stopped by our virus protection software, Sophos, but Sophos wasn’t installed on that system. It never would have been allowed pre-COVID, but we were acting quickly to respond to the pandemic, and unfortunately, one risk averted was another risk created.  

Lessons Learned 

Data First – Have multiple copies of data on multiple mediums in multiple locations. The best IT experts in the world can’t help you get your data back if every copy is compromised. We never could have taken the bold step we did if we had not had those tape copies. Our data had been encrypted by the virus as fast as disks could carry it. You have to have a copy of data that can’t be touched. Tape provided an air-gap, an electronically disconnected copy of data that could not be accessed. 

Even if you’re willing to pay a ransom, encryption-by-ransomware is messy. There’s no guarantee you’ll get the decryption tool if you pay the ransom, and there’s no guarantee it will work. We relied on both tape backups and disk snapshots to restore our systems. We are now exploring ways to replicate our disk snapshots to a dark site. The decentralization of data can create management challenges, but we’re exploring some pretty promising ways to centrally manage it.
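The “multiple copies on multiple mediums in multiple locations” advice is often formalized as the 3-2-1 rule: at least three copies of your data, on at least two different media types, with at least one copy air-gapped or offsite. A minimal sketch of auditing a backup inventory against that rule (the inventory and field names below are hypothetical, not Spectra Logic tooling):

```python
# Hypothetical 3-2-1 backup audit sketch; the inventory below is illustrative,
# not an actual Spectra Logic configuration.

copies = [
    {"name": "nightly NAS backup",  "medium": "disk", "air_gapped": False},
    {"name": "daily disk snapshot", "medium": "disk", "air_gapped": False},
    {"name": "weekly tape set",     "medium": "tape", "air_gapped": True},
]

def satisfies_3_2_1(copies):
    """At least 3 copies, on at least 2 media types, with 1 air-gapped copy."""
    media_types = {c["medium"] for c in copies}
    has_air_gap = any(c["air_gapped"] for c in copies)
    return len(copies) >= 3 and len(media_types) >= 2 and has_air_gap

print(satisfies_3_2_1(copies))  # True: tape provides the air-gapped copy
```

In the incident above, the weekly tape set is exactly the copy that ransomware running on the network could not reach.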

Experts Second – Have cybersecurity experts onboard or close at hand. Not all companies are large enough to justify a full-time cybersecurity team. Four years earlier, we’d had three test servers that were not protected by our VPN become infected. Our production LAN was protected by the firewall, and the servers were easily cleaned and restored from backups. That experience was useful, however; a few of the protections we put in place then helped us during this event. We do an amazing job of security here, but we don’t deal with the aftermath of an attack on a daily basis the way cybersecurity experts do. Having ransomware insurance was a brilliant way to have a cybersecurity team there in an instant. You’re never 100 percent safe. These attacks happen to organizations of every size and level of expertise, from world governments to the biggest names in industry. You don’t want to go it alone. The cybersecurity team we had access to helped decrease our downtime and took other steps such as verifying that no data had actually been stolen or accessed. That avoided a nightmare that many organizations are saddled with for years afterward.

Balance Third – Good IT security is a balance of culture and security strategy. No matter what level of security you deploy, you could always add more. At some point, however, that will start to impact your user experience and possibly the company goals that IT supports. Will you allow the use of Macs and PCs? Will you allow remote access, or require everyone to work on premises? How much will your virus protection software filter? Will vital communication be blocked due to extreme protection? It’s all a balance. You may think you’re willing to accept a risk that you really aren’t willing to accept once it hits. Figure this out ahead of time. Consult with security experts to develop a strategy that balances risk and IT policy.

In the end, we overcame the attack with virtually no data loss and absolutely no data stolen. One of our servers was not being backed up appropriately. Data for that server had to be reconstructed. It was time consuming and costly.  

As difficult as it was, this is what success looks like after such an attack. Assess your infrastructure, your access to experts and your approach to IT security. There’s no lock that can’t be broken, but by taking this approach, you will be able to minimize the damage and ensure business continuity.

Wednesday, July 15, 2020

Tape Storage Could Soon Crush Hard Drives With 400TB Capacities

If you’ve got a mountain of digital data to store but don’t necessarily need to access it every day, tape cartridges are the way to go. Twelve terabytes of storage will set you back about $100 these days, but in the coming years, Fujifilm believes it can push the technology to 400TB capacities in a single cartridge. 

The medium lives on as an affordable and reliable way to store terabytes of data. For big corporations that need to hold onto data for decades, the savings do add up.

At the moment, Linear Tape-Open, or LTO, is the primary type of magnetic tape storage used in data centers around the world. As the technology improves, newer generations are simply given a name with a higher number. The current LTO-8 standard allows 12TB of data to be stored on a cartridge, or up to 30TB with compression. The new LTO-9 format, expected by year end or early 2021, takes native capacities to 18TB per cartridge, but Fujifilm is looking much farther down the road than that.
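The native and compressed figures quoted for each generation are consistent with the 2.5:1 compression ratio the LTO-8 and LTO-9 specifications assume; a quick check:

```python
# The LTO-8/LTO-9 compressed capacities follow from the spec's assumed
# 2.5:1 compression ratio applied to native capacity.

LTO_COMPRESSION_RATIO = 2.5

def compressed_tb(native_tb):
    return native_tb * LTO_COMPRESSION_RATIO

print(compressed_tb(12))  # LTO-8: 30.0 TB compressed
print(compressed_tb(18))  # LTO-9: 45.0 TB compressed
```

Real-world gains depend on the data, of course; already-compressed or encrypted data will land much closer to native capacity.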

Data cartridges currently in use are made from a material called barium ferrite (BaFe), and every new generation uses smaller and smaller particles to cram more data onto a thin strip of tape. There’s a limit, though, when the particles get too small to be accurately read. So, as reported by Blocks & Files, Fujifilm is looking to switch to a new material called strontium ferrite (SrFe), whose smaller particles will allow for increased densities and more storage capacity per tape.

But don’t toss those LTO-8 tapes just yet. On average it takes about two-and-a-half years between data tape generations for the improved technology to reach the market. It’s estimated that Fujifilm’s proposed 400TB cartridges would be part of the LTO-13 spec (we’re only just getting LTO-9 this year, remember) and so won’t be an actual product until around 2030 at the earliest. The technology sounds promising, but given how 2020 has been going, 2030 feels more like a century away than just a decade.

Monday, June 22, 2020

Comparison of Arcserve UDP and Veritas NetBackup

Anyone involved with IT in a midsize enterprise in any meaningful capacity wants to make the best choice when it comes to data protection.

These individuals intuitively understand the complexities associated with supporting this environment. Inside the data center one finds multiple types of software. Outside the data center, one encounters multiple types of clouds, edge devices and PCs.

On top of these variables, enterprise IT must also account for next-gen requirements that run the gamut. All these variables contribute to an enterprise needing a sophisticated solution that protects its data and equips it for future requirements.

Arcserve UDP and Veritas NetBackup
Midsize enterprises frequently consider Arcserve Unified Data Protection (UDP) and Veritas NetBackup as solutions to protect their environments. Both products target enterprises though they approach them differently.

Arcserve specifically targets and develops its UDP software for midsize enterprises with 250-5,000 employees.

In contrast, a recent analysis of over 2,000 companies that use NetBackup found these companies range widely in employee size: 38% were midsize (200-5,000 employees) and 30% were large enterprises (5,000+ employees). This analysis illustrates that NetBackup provides the breadth of features that enterprises need.

The larger question becomes whether a midsize enterprise can easily and cost effectively utilize NetBackup’s features. That’s where the importance of selecting a solution optimized for a midsize enterprise emerges.

Baseline Requirements for Midsize Enterprises
To make the best choice, a midsize enterprise must first verify any solution under consideration meets its baseline data protection and recovery requirements.

These break down into 5 general areas:
• Operating systems (OSes)
• Applications
• Hypervisors
• Cloud
• Next-gen requirements

Both UDP and NetBackup deliver on these features core to midsize enterprise requirements. Each solution supports the primary applications, databases, OSes, and hypervisors that a midsize enterprise commonly uses.

Arcserve’s focus on delivering a solution optimized for a midsize enterprise shows up in a few ways. The company brings its own cloud offering to the table. Using its all-in-one DR service, the firm can host backup data and perform recoveries in its cloud on behalf of the midsize enterprise. The vendor also protects Microsoft Office 365 data whereas Veritas offers a separate SaaS backup offering to protect Office 365 data.

Endpoint Data and Anti-Ransomware Protection
Endpoint data and anti-ransomware protection frequently factor in when making the best choice among data protection solutions. Both Arcserve and Veritas share some traits in common in how their respective solutions help a midsize enterprise protect vs. ransomware. The two firms do, however, currently differ in their respective strategies for detecting and preventing ransomware on endpoint devices.

Veritas suggests that a midsize enterprise separately deploy Carbonite EndPoint to secure its endpoint devices from ransomware.

In contrast, Arcserve offers its own backup agents to back up endpoint devices. It also partners with Sophos to detect ransomware and protect Windows, Linux, and Unix clients from it.

Installation and Management
Both firms offer integrated backup appliances to accelerate and simplify deployment and management of their respective solutions, and each offers a range of appliance models to meet the needs of different-sized midsize enterprises.

To configure NetBackup, best practices call for deploying a minimum of 2 servers: a NetBackup master server and one or more media servers. A midsize enterprise should find UDP offers a more straightforward, lower cost software configuration and licensing framework. An enterprise may only need to acquire a single Arcserve server that can host all necessary UDP software and associated license keys.

Backup Solution Costs
Often the cost of the software becomes the deciding factor when making the best choice. Both vendors offer software licensing options that may appeal to a midsize enterprise. Among NetBackup’s additional software licensing options, two stand out. Its Starter Pack provides data protection for 5, 20, or 40 clients and starts at about $3,250. While more affordable, this version cannot be upgraded to NetBackup Enterprise Server.

UDP breaks its licensing out in distinct ways that a midsize enterprise may more easily grasp and implement. A midsize enterprise may choose from four different editions of UDP with pricing starting at under $700. Once a midsize enterprise identifies the version that best matches its requirements, it selects the optimal software licensing option for its environment.

UDP Tuned to Midsize Enterprise Needs
Both UDP and NetBackup provide the range of features that merit consideration by a midsize enterprise. However, when it comes to making the best choice for data protection in midsize enterprises, Arcserve specifically tunes UDP to match these requirements.

It already natively supports Office 365 and endpoint data protection and delivers all-in-one disaster recoveries both on-premises and in the cloud. Further, it delivers these features using a robust, yet simple to understand software version and licensing model.