Friday, October 04, 2024

Information Storage Industry Conclusions and Recommendations on Tape

Tape has several attributes which favor it over disk for long-term archival use, including its considerably lower cost (both acquisition and operating costs) and its higher data reliability. An important contributing factor to tape’s lower cost is its significantly lower power consumption compared to disk. Energy saving in the data center environment has taken on an importance of its own, beyond strictly cost implications, because data centers are now concerned about the availability of sufficient power for their future operations. 

For the above reasons and more, the use of tape continues to be an important element in most enterprise IT operations. The most common approach for enterprise storage today is not to deliberate whether to use flash or disk or tape, but instead how to best use flash, disk, and tape in an integrated approach for both data protection and archive, taking advantage of what each storage technology offers. Software that presents a standard interface, such as NFS, CIFS, or OpenStack Swift, has enabled tiered storage solutions that are transparent to the applications. There are several industry segments where archival storage on tape has become a strategic part of the operation, of which several examples have been detailed in this report. 

A further ongoing development in the IT environment is the growth of public cloud storage, which continues to accelerate. Cloud storage services are offered with a variety of business models and pricing schemes. Many of these services are highly cost sensitive and/or critically dependent on maintaining the security and integrity of the stored data. Tape's fundamental attributes, especially low cost, high data reliability, and transparent encryption, make it ideal for use in cloud storage offerings. 

We expect that tape will continue to play a strong role in future data storage environments; however, that will require ongoing attention to advancing tape technology and its ease of use. First and foremost, the research and development supporting aggressive tape density and capacity advances must continue if tape is to maintain its significant cost advantage over disk. Please refer to section 2 for details on the 10-year roadmap, which shows robust improvements in capacity resulting in lower TCO to maintain that cost advantage. To support increased use of tape for archival storage, advances in tape data organization and the supporting software that enables easy access to specific data records in the archive must continue to be developed. Software products that migrate data to new generations of hardware have benefited the adoption and use of tape, and they must continue. Tape drive and media products that extend backward compatibility by two or three generations have also reduced the need for frequent migrations and helped tape's adoption for large digital archives; they, too, must continue going forward. Finally, although the outlook for tape presented here is very positive, the industry should strive to better educate customers and to better publicize the inherent and significant advantages of tape storage.

Thursday, October 03, 2024

Why the Future of Data Storage is (Still) Magnetic Tape

Disk drives are reaching their limits, but magnetic tape just gets better and better

 It should come as no surprise that recent advances in big-data analytics and artificial intelligence have created strong incentives for enterprises to amass information about every measurable aspect of their businesses. And financial regulations now require organizations to keep records for much longer periods than they had to in the past. So companies and institutions of all stripes are holding onto more and more.

Studies show that the amount of data being recorded is increasing at 30 to 40 percent per year. At the same time, the capacity of modern hard drives, which are used to store most of this, is increasing at less than half that rate. Fortunately, much of this information doesn’t need to be accessed instantly. And for such things, magnetic tape is the perfect solution.
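The compounding effect of that mismatch is easy to quantify. Here is a quick back-of-the-envelope sketch in Python; the rates are illustrative midpoints of the ranges cited above, not figures from any one study:

```python
# Compound growth of recorded data vs. hard-drive capacity.
# Rates are illustrative midpoints of the ranges cited above.
data_growth = 1.35      # data recorded grows ~35% per year
capacity_growth = 1.15  # HDD capacity grows ~15% per year

years = 10
data_factor = data_growth ** years          # ~20x more data
capacity_factor = capacity_growth ** years  # ~4x bigger drives

# The number of drives needed per unit of today's demand
# grows by the ratio of the two factors:
drive_multiplier = data_factor / capacity_factor
print(f"After {years} years: {data_factor:.1f}x data, "
      f"{capacity_factor:.1f}x capacity, "
      f"{drive_multiplier:.1f}x as many drives needed")
```

In other words, if both trends held for a decade, you would need roughly five times as many drives just to keep up, which is exactly why a cheaper medium for cold data matters.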

Seriously? Tape? The very idea may evoke images of reels rotating fitfully next to a bulky mainframe in an old movie like Desk Set or Dr. Strangelove. So, a quick reality check: Tape has never gone away!

Indeed, much of the world’s data is still kept on tape, including data for basic science, such as particle physics and radio astronomy, human heritage and national archives, major motion pictures, banking, insurance, oil exploration, and more. There is even a cadre of people (including me, trained in materials science, engineering, or physics) whose job it is to keep improving tape storage.

Tape has been around for a long while, yes, but the technology hasn’t been frozen in time. Quite the contrary. Like the hard disk and the transistor, magnetic tape has advanced enormously over the decades.

The first commercial digital-tape storage system, IBM’s Model 726, could store about 1.1 megabytes on one reel of tape. Today, a modern tape cartridge can hold 15 terabytes. And a single robotic tape library can contain up to 278 petabytes of data. Storing that much data on compact discs would require more than 397 million of them, which if stacked would form a tower more than 476 kilometers high.
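The compact-disc comparison follows directly from the library capacity; a quick sanity check, assuming a standard 700-MB CD that is 1.2 mm thick:

```python
# Sanity-check the compact-disc comparison above.
library_capacity_bytes = 278e15   # 278 PB robotic tape library
cd_capacity_bytes = 700e6         # standard 700 MB CD (assumed)
cd_thickness_mm = 1.2             # standard CD thickness (assumed)

cds_needed = library_capacity_bytes / cd_capacity_bytes
stack_height_km = cds_needed * cd_thickness_mm / 1e6  # mm -> km

print(f"{cds_needed/1e6:.0f} million CDs, "
      f"stacked {stack_height_km:.1f} km high")
```

The arithmetic reproduces both figures in the text: just over 397 million discs, stacked more than 476 kilometers high.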

It’s true that tape doesn’t offer the fast access speeds of hard disks or semiconductor memories. Still, the medium’s advantages are many. To begin with, tape storage is more energy efficient: Once all the data has been recorded, a tape cartridge simply sits quietly in a slot in a robotic library and doesn’t consume any power at all. Tape is also exceedingly reliable, with error rates that are four to five orders of magnitude lower than those of hard drives. And tape is very secure, with built-in, on-the-fly encryption and additional security provided by the nature of the medium itself. After all, if a cartridge isn’t mounted in a drive, the data cannot be accessed or modified. This “air gap” is particularly attractive in light of the growing rate of data theft through cyberattacks.

The offline nature of tape also provides an additional line of defense against buggy software. For example, in 2011, a flaw in a software update caused Google to accidentally delete the saved email messages in about 40,000 Gmail accounts. That loss occurred despite there being several copies of the data stored on hard drives across multiple data centers. Fortunately, the data was also recorded on tape, and Google could eventually restore all the lost data from that backup.

The 2011 Gmail incident was one of the first disclosures that a cloud-service provider was using tape for its operations. More recently, Microsoft let it be known that its Azure Archive Storage uses IBM tape storage equipment.

 All these pluses notwithstanding, the main reason why companies use tape is usually simple economics. Tape storage costs one-sixth the amount you’d have to pay to keep the same amount of data on disks, which is why you find tape systems almost anyplace where massive amounts of data are being stored. But because tape has now disappeared completely from consumer-level products, most people are unaware of its existence, let alone of the tremendous advances that tape recording technology has made in recent years and will continue to make for the foreseeable future.

All this is to say that tape has been with us for decades and will be here for decades to come. How can I be so sure? Read on.

Tape has survived for as long as it has for one fundamental reason: It’s cheap. And it’s getting cheaper all the time. But will that always be the case?

You might expect that if the ability to cram ever more data onto magnetic disks is diminishing, so too must this be true for tape, which uses the same basic technology but is even older. The surprising reality is that for tape, this scaling up in capacity is showing no signs of slowing. Indeed, it should continue for many more years at its historical rate of about 33 percent per year, meaning that you can expect a doubling in capacity roughly every two to three years. Think of it as a Moore’s Law for magnetic tape.
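A 33 percent annual increase does indeed double capacity roughly every two to three years, as a one-line calculation shows:

```python
import math

# Doubling time implied by ~33% per year capacity scaling
annual_growth = 1.33
doubling_time_years = math.log(2) / math.log(annual_growth)
print(f"Doubling time: {doubling_time_years:.1f} years")  # ~2.4
```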

That’s great news for anyone who has to deal with the explosion in data on a storage budget that remains flat. To understand why tape still has so much potential relative to hard drives, consider the way tape and hard drives evolved.

Both rely on the same basic physical mechanisms to store digital data. They do so in the form of narrow tracks in a thin film of magnetic material in which the magnetism switches between two states of polarity. The information is encoded as a series of bits, represented by the presence or absence of a magnetic-polarity transition at specific points along a track. Since the introduction of tape and hard drives in the 1950s, the manufacturers of both have been driven by the mantra “denser, faster, cheaper.” As a result, the cost of both, in terms of dollars per gigabyte of capacity, has fallen by many orders of magnitude.

These cost reductions are the result of exponential increases in the density of information that can be recorded on each square millimeter of the magnetic substrate. That areal density is the product of the recording density along the data tracks and the density of those tracks in the perpendicular direction.

Early on, the areal densities of tapes and hard drives were similar. But the much greater market size and revenue from the sale of hard drives provided funding for a much larger R&D effort, which enabled their makers to scale up more aggressively. As a result, the current areal density of high-capacity hard drives is about 100 times that of the most recent tape drives.

Nevertheless, because they have a much larger surface area available for recording, state-of-the-art tape systems provide a native cartridge capacity of up to 15 TB—greater than the highest-capacity hard drives on the market. That’s true even though both kinds of equipment take up about the same amount of space.

 With the exception of capacity, the performance characteristics of tape and hard drives are, of course, very different. The long length of the tape held in a cartridge—normally hundreds of meters—results in average data-access times of 50 to 60 seconds compared with just 5 to 10 milliseconds for hard drives. But the rate at which data can be written to tape is, surprisingly enough, more than twice the rate of writing to disk.

Over the past few years, the areal density scaling of data on hard disks has slowed from its historical average of around 40 percent a year to between 10 and 15 percent. The reason has to do with some fundamental physics: To record more data in a given area, you need to allot a smaller region to each bit. That in turn reduces the signal you can get when you read it. And if you reduce the signal too much, it gets lost in the noise that arises from the granular nature of the magnetic grains coating the disk.

It’s possible to reduce that background noise by making those grains smaller. But it’s difficult to shrink the magnetic grains beyond a certain size without compromising their ability to maintain a magnetic state in a stable way. The smallest size that’s practical to use for magnetic recording is known in this business as the superparamagnetic limit. And disk manufacturers have reached it.
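A common rule of thumb for granular recording media, an assumption I'm adding here rather than a figure from the discussion above, is that the media signal-to-noise ratio scales as roughly 10·log10 of the number of grains per bit. That makes the trade-off concrete:

```python
import math

def media_snr_db(grains_per_bit):
    # Rule-of-thumb SNR model for granular magnetic media:
    # SNR (dB) ~ 10 * log10(N), where N = grains per bit (assumed)
    return 10 * math.log10(grains_per_bit)

# Halving the area allotted to each bit (at fixed grain size)
# halves N and costs about 3 dB of signal-to-noise ratio:
loss = media_snr_db(1000) - media_snr_db(500)
print(f"SNR penalty for halving bit area: {loss:.1f} dB")
```

Under this model, every density doubling either sacrifices about 3 dB of SNR or demands smaller grains, which is precisely the squeeze that ends at the superparamagnetic limit.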

Until recently, this slowdown was not obvious to consumers, because disk-drive manufacturers were able to compensate by adding more heads and platters to each unit, enabling a higher capacity in the same size package. But now both the available space and the cost of adding more heads and platters are limiting the gains that drive manufacturers can make, and the plateau is starting to become apparent.

There are a few technologies under development that could enable hard-drive scaling beyond today’s superparamagnetic limit. These include heat-assisted magnetic recording (HAMR) and microwave-assisted magnetic recording (MAMR), techniques that enable the use of smaller grains and hence allow smaller regions of the disk to be magnetized. But these approaches add cost and introduce vexing engineering challenges. And even if they are successful, the scaling they provide is, according to manufacturers, likely to remain limited. Western Digital Corp., for example, which recently announced that it will probably begin shipping MAMR hard drives in 2019, expects that this technology will enable areal density scaling of only about 15 percent per year.

In contrast, tape storage equipment currently operates at areal densities that are well below the superparamagnetic limit. So tape’s Moore’s Law can go on for a decade or more without running into such roadblocks from fundamental physics.

Still, tape is a tricky technology. Its removable nature, the use of a thin polymer substrate rather than a rigid disk, and the simultaneous recording of up to 32 tracks in parallel create significant hurdles for designers. That’s why my research team at the IBM Research–Zurich lab has been working hard to find ways to enable the continued scaling of tape, either by adapting hard-drive technologies or by inventing completely new approaches.

In 2015, we and our collaborators at FujiFilm Corp. showed that by using ultrasmall barium ferrite particles oriented perpendicular to the tape, it’s possible to record data at more than 12 times the density achievable with today’s commercial technology. And more recently, in collaboration with Sony Storage Media Solutions, we demonstrated the possibility of recording data at an areal density that is about 20 times the current figure for state-of-the-art tape drives. To put this in perspective, if this technology were to be commercialized, a movie studio, which now might need a dozen tape cartridges to archive all the digital components of a big-budget feature, would be able to fit all of them on a single tape.

To enable this degree of scaling, we had to make a bunch of technical advances. For one, we improved the ability of the read and write heads to follow the slender tracks on the tape, which were just 100 or so nanometers wide in our latest demo.

We also had to reduce the width of the data reader—a magnetoresistive sensor used to read back the recorded data tracks—from its current micrometer size to less than 50 nm. As a result, the signal we could pick up with such a tiny reader got very noisy. We compensated by increasing the signal-to-noise ratio inherent to the media, which is a function of the size and orientation of the magnetic particles as well as their composition and the smoothness and slickness of the tape surface. To help further, we improved the signal processing and error-correction schemes our equipment employed.

To ensure that our new prototype media can retain recorded data for decades, we changed the nature of the magnetic particles in the recording layer, making them more stable. But that change made it harder to record the data in the first place, to the extent that a normal tape transducer could not reliably write to the new media. So we used a special write head that produces magnetic fields much stronger than a conventional head could provide.

Combining these technologies, we were able to read and write data in our laboratory system at a linear density of 818,000 bits per inch. (For historical reasons, tape engineers around the world measure data density in inches.) In combination with the 246,200 tracks per inch that the new technology can handle, our prototype unit achieved an areal density of 201 gigabits per square inch. Assuming that one cartridge can hold 1,140 meters of tape—a reasonable assumption, based on the reduced thickness of the new tape media we used—this areal density corresponds to a cartridge capacity of a whopping 330 TB. That means that a single tape cartridge could record as much data as a wheelbarrow full of hard drives.
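The demonstration's numbers multiply out as described. To close the loop from areal density to the 330-TB cartridge figure, one additional assumption is needed: the recordable width of the half-inch tape. Roughly 7.4 mm is my working estimate here, chosen to be consistent with the stated capacity (the rest of the width goes to servo bands and edge margins); it is not a number from the demonstration itself:

```python
# Reconstruct the laboratory-demo figures cited above.
linear_density_bpi = 818_000    # bits per inch along a track
track_density_tpi = 246_200     # tracks per inch across the tape
tape_length_m = 1_140           # meters of tape per cartridge
writable_width_mm = 7.4         # assumed recordable width (servo
                                # bands, edge margins excluded)

MM_PER_INCH = 25.4
areal_density_gb_per_in2 = linear_density_bpi * track_density_tpi / 1e9

length_in = tape_length_m * 1000 / MM_PER_INCH
width_in = writable_width_mm / MM_PER_INCH
total_bits = areal_density_gb_per_in2 * 1e9 * length_in * width_in
capacity_tb = total_bits / 8 / 1e12

print(f"Areal density: {areal_density_gb_per_in2:.0f} Gb/in^2")  # ~201
print(f"Cartridge capacity: {capacity_tb:.0f} TB")               # ~330
```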

In 2015, the Information Storage Industry Consortium, an organization that includes HP Enterprise, IBM, Oracle, and Quantum, along with a slew of academic research groups, released what it called the “International Magnetic Tape Storage Roadmap.” That forecast predicted that the areal density of tape storage would reach 91 Gb per square inch by 2025. Extrapolating the trend suggests that it will surpass 200 Gb per square inch by 2028.
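That extrapolation is consistent with the historical scaling rate of about 33 percent per year, taking the roadmap's 91 Gb per square inch in 2025 as the starting point:

```python
# Extrapolate the INSIC roadmap figure three years forward
density_2025 = 91          # Gb/in^2, per the roadmap forecast
annual_scaling = 1.33      # ~33% per year historical rate

density_2028 = density_2025 * annual_scaling ** 3
print(f"Projected 2028 areal density: {density_2028:.0f} Gb/in^2")
```

Three years of 33 percent growth carries the projection past the 200 Gb-per-square-inch mark.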

The authors of that road map each had an interest in the future of tape storage. But you needn’t worry that they were being too optimistic. The laboratory experiments that my colleagues and I have recently carried out demonstrate that 200 Gb per square inch is perfectly possible. So the feasibility of keeping tape on the growth path it’s had for at least another decade is, to my mind, well assured.

Indeed, tape may be one of the last information technologies to follow a Moore’s Law–like scaling, maintaining that pace for the next decade, if not beyond. That streak, in turn, will only increase the cost advantage of tape over hard drives and other storage technologies. So even though you may rarely see it outside of a black-and-white movie, magnetic tape, old as it is, will be here for years to come.

Thursday, May 04, 2023

Announcing Myriad All-Flash File and Object Storage Software

We’re excited to announce Myriad, an all-flash file and object storage platform based on a modern, cloud-native software architecture that avoids the limitations of legacy NAS storage systems. It brings new levels of simplicity and adaptability to your high-performance workloads without the constraints of specialized hardware.


Key highlights include:

  • Scale-out software architecture delivers consistent, low-latency performance for high-bandwidth and high-IOPS applications.
  • A cloud-native microservices architecture orchestrated by Kubernetes provides a resilient, “always on” architecture, and delivers new features and fixes rapidly with less risk.
  • Automated storage management allows a cluster to be scaled or modified without user intervention or the need for advanced IT skills.
  • Self-healing, self-balancing software automatically rebuilds data in the background while also rebalancing data as storage clusters expand, shrink, and change.
  • Inline data deduplication and compression reduce the cost of flash storage and improve data efficiencies relative to legacy storage platforms.
  • Simplified data protection and recovery through built-in snapshots, clones, snapshot recovery tools, and rollback capabilities.

Monday, March 06, 2023

AI Powered Storage Market Valued $15.6 Billion in 2021 - CAGR of 26.7% from 2022 to 2031

According to a report (priced at $5,820) published by Allied Market Research, the global AI powered storage market was valued at $15.6 billion in 2021 and is anticipated to reach $162.5 billion by 2031, growing at a CAGR of 26.7% from 2022 to 2031.

The report offers a detailed analysis of the top winning strategies, evolving market trends, market size and estimations, value chain, key investment pockets, drivers and opportunities, competitive landscape, and regional landscape. It is a source of information for new entrants, frontrunners, and shareholders, helping them introduce necessary strategies for the future and take essential steps to significantly strengthen their position in the market.

It offers a detailed segmentation of the global AI powered storage market based on component, storage system, storage architecture, storage medium, end user, and region. It provides a comprehensive analysis of every segment and its respective sub-segments with the help of graphical and tabular representation.

This analysis can essentially help market players, investors, and new entrants in determining and devising strategies based on the fastest-growing segments and highest revenue generation that is mentioned in the report.

Based on component, the hardware segment held the major market share in 2021, holding nearly three-fifths of the global AI powered storage market share, and is expected to maintain its leadership status during the forecast period. However, the software segment is expected to register the fastest CAGR of 28.2% during the forecast period.

On the basis of storage architecture, the file and object based storage segment held the largest market share in 2021, accounting for nearly two-thirds of the global AI powered storage market share, and is expected to maintain its leadership status during the forecast period. Nevertheless, the object storage segment is expected to register the highest CAGR of 28.1% during the forecast period.

In terms of end user, the enterprises segment held the major market share in 2021, contributing nearly two-fifths of the global AI powered storage market share, and is expected to maintain its leadership position during the forecast period. However, the government bodies segment is expected to register the fastest CAGR of 28.9% during the forecast period.

Based on storage medium, the HDD segment held the major market share in 2021, contributing more than half of the global AI powered storage market share. However, the SSD segment is expected to take the lead in terms of revenue and to register the highest CAGR of 28.0% during the forecast period.

Region-wise, the North American region held the major market share in 2021, holding more than two-fifths of the global AI powered storage market share, and is expected to maintain its leadership status during the forecast period. However, the Asia-Pacific market is forecast to record the fastest CAGR of 28.3% during the assessment period.

The key players analyzed in this market report include Advanced Micro Devices, Inc., Amazon Web Services, Inc., Cisco Systems, Inc., Dell Technologies Inc., Fujitsu Limited, Google, Inc., Hitachi, Ltd., The Hewlett Packard Enterprise Company, Intel Corporation, Lenovo Group Limited, Micron Technology, Inc., Microsoft Corporation, NetApp, Inc., The International Business Machines Corporation, Pure Storage, Samsung Electronics Co., Ltd., and Toshiba Corporation.

Key findings of the study:
• By component, the hardware segment dominated the AI powered storage market in 2021. However, the software segment is expected to exhibit significant growth during the forecast period.
• On the basis of storage system, the DAS segment dominated the AI powered storage market in 2021. However, the NAS segment is expected to witness the highest growth rate during the forecast period.
• On the basis of storage architecture, the file and object based storage segment dominated the AI powered storage market in 2021. However, the object storage segment is expected to witness the highest growth rate during the forecast period.


Tuesday, February 14, 2023

WW 4CQ22 HDD Units Slip 6% to 36 Million

With units slipping 2% Q/Q to 15.35 million HDDs, an 8% reduction in sequential nearline units resulted in Seagate's 5% Q/Q capacity reduction to 112.52EB, even as average nearline capacity held flat at 15.5TB. Performance enterprise units rose 4% Q/Q, while solid branded increases, reflecting improved retail spending, offset declines in other categories such as surveillance (3.5″ CE). Average HDD capacity of 7.3TB dipped 3% sequentially on the reduced contribution of nearline HDDs; however, Seagate posted the smallest percentage reduction in nearline unit shipments, resulting in a nearly 10-percentage-point share gain to 51%. Total HDD share of 42% rose just under 1.5 percentage points Q/Q.

Toshiba’s slightly higher sequential unit shipments of 8.02 million were countered by a 19% Q/Q drop in nearline HDDs, driving a 14% reduction in total capacity shipped, which fell to 31.89EB. Average nearline capacity of 13.1TB slipped only 100GB from the prior quarter. Performance enterprise shipments rose a solid 17% sequentially, indicating that the OEM and channel weakness plaguing this category in prior quarters had eased. The client categories of desktop and mobile held largely unchanged from the prior quarter, but a large 57% Q/Q increase in 2.5″ CE HDDs drove a 23% rise in total CE units. Average HDD capacity of 4.0TB fell from 4.6TB in 3CQ22 on the lower nearline contribution to the total. Total HDD shipment share of 22% increased more than one percentage point Q/Q.

Western Digital experienced a large 44% drop in sequential nearline HDD unit shipments, with the segment's capacity shipments cut nearly in half over the same period. The nearline reductions followed on the heels of the company's competitors' declines in the prior quarter. Total shipments of 12.89 million HDDs and 81.28EB fell 12% and 37% Q/Q, respectively. Nearline shipments of 3.41 million units and 52.83EB cut the company's market share to 32% for units and 33% for capacity shipped - all markedly lower Q/Q. Solid client HDD increases in desktop and mobile were driven by healthier seasonal branded sales, while market-wide surveillance demand weakness cut the company's 3.5″ CE units. Average HDD capacity of 6.3TB plunged 28% sequentially, while total market share shed nearly 3 percentage points from the prior quarter, falling to just under 36%.

Thursday, December 08, 2022

Is Air Gap on your Superhero IT Utility Belt?

Each day IT admins work diligently behind the scenes to keep their systems running, their workforce productive, and their customers happy. But they are also secretly Caped Crusaders ever vigilant for the next cyber threat to their IT infrastructure.

Here are three reasons why an air gap should be a part of your Superhero Utility Belt. An air gap is any IT system, digital device, or storage medium, such as tape or a hard disk drive (HDD), that is disconnected from the network, making it automatically “air gap” protected from malware.

1. Air Gap is a Cloak of Invisibility - There are countless ways ransomware infiltrates and seeks out IT systems to paralyze, corrupt, and steal data. Air-gapped systems or media can’t be remotely hacked, turned on/off, encrypted, or wiped - because the ransomware can’t see them.

2. Air Gap can be Immutable - Sadly, companies often do pay ransoms, only to find that their data is irreversibly corrupted or deleted. Data storage that supports WORM (write once, read many) prevents ransomware from modifying, encrypting, or deleting data. AES-256 encryption adds yet another layer of data protection.

3. Air Gap is the Deep Freeze of Cloud Storage - Most cloud vendors offer cold storage solutions (tape or disk) where inactive data is stored offline (air-gapped) indefinitely for backup, disaster recovery, or regulatory compliance. Offline cold storage systems are not only the most cost-effective solutions, they also add the benefit of isolating huge amounts of replicated business data from cyberattacks.


Monday, October 25, 2021

ATTO Technology Announces Support for Apple’s Latest Operating System macOS 12 Monterey

All ATTO adapters, software, and utilities have been tested and validated with the new OS


ATTO Technology, Inc., today announced its hardware and software fully support the latest operating system from Apple, macOS 12 Monterey.

Rigorous performance evaluations have been done to ensure all ATTO products perform at the highest level with macOS 12. All ATTO adapters, software, and utilities have been tested and are already validated with Monterey. The tests were passed on both M1 and Intel®-based Apple machines.

“We’ve been working with Apple for decades now, and those deep roots show in how we were able to validate our products before Monterey was officially released,” said Timothy J. Klein, president and CEO, ATTO Technology. “Our goal was to make sure Mac users have what they need from us to keep producing the incredible things they do without missing a beat.”

ATTO Technology is a network and storage connectivity manufacturer whose products power high-performance, demanding workflows for media and entertainment, government, education, and scientific users. From Thunderbolt™ to Ethernet, Fibre Channel to SAS/SATA, ATTO products are the highest performing, most reliable and easiest to use connectivity solutions available for Mac environments.

Apple is the platform of choice for creative professionals who work with demanding design and digital production workflows where team collaboration is often essential. Digital assets continue to evolve in complexity, size and number which naturally leads to more and more data moving through networks. ATTO Technology supplies the connectivity purpose made to address these challenges.

Data density and complexity typify most workflows today, like in science and education where Apple computers are popular and widely used. The same technology from ATTO that Hollywood studios rely upon is equally effective across all industries.

Products in the ATTO portfolio supporting macOS 12 Monterey include:

  • ATTO Celerity™ 32Gb/s (Gen 7 and Gen 6), 16Gb/s (Gen 6) and 8Gb Fibre Channel host bus adapters (HBAs) with ATTO MultiPath Director
  • ATTO ExpressSAS® GT 12Gb SAS HBAs
  • Thunderbolt 3 and 2 enabled ATTO ThunderLink adapters
  • ATTO FastFrame™ 10/25/40/50/100GbE SmartNICs
  • Software including ATTO 360 Tuning Software, ATTO Xtend SAN iSCSI Initiator, ATTO ConfigTool, XstreamVIEW, Express NAV and QuickNAV