Archive for the ‘SAN’ Category
It is no surprise that SSD and flash drives are now mainstream. The rotating hard drive is still around, but only for specific use cases… mostly “cheap and deep” storage, video streaming and archiving. Even at 10TB densities, these drives are destined for the junkyard at some point.
Flash storage is approaching 32TB+ later this year, and the cost is coming down fast. Do you remember when a 200GB flash drive cost about $30,000? It wasn’t that long ago. But with flash storage growing so quickly, what does that mean for performance? Durability? Manageability?
These are real-world challenges. Flash storage vendors are mostly concerned with making drives bigger, so we as consumers cannot just assume they are all the same. They are not… As the drives get bigger, the performance of flash drives starts to level out. The good news is that when we design storage solutions with flash, the bottleneck is no longer in the SAN, so we too can take the emphasis off of performance. The performance conversation has become the “uncool” thing; nobody wants to have it anymore. The bottleneck has shifted to the application, the people, the process. That’s right! The bottleneck now is the business. With networking at 10Gb/40Gb and servers so dense and powerful, the business can finally focus on the things that matter to it. This is why we see such a big shift into the cloud, application development and IoT. Flash is the enabler for businesses to FINALLY start to focus on business and not infrastructure.
So, back to the technical discussion here…
Durability is less of an issue with large flash drives because of the abundance of cells available for writes and re-writes. And because flash wear is predictable, drive failures can be anticipated, unlike the management headaches of unstable legacy storage.
Manageability is easier with SDS (software-defined storage) and hyper-converged systems. These systems handle faults much better through their distributed design and the software’s ability to be elastic, achieving uptime that exceeds five nines.
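For reference, “five nines” is a tighter bar than it sounds. A quick back-of-the-envelope calculation (nothing vendor-specific here, just the arithmetic):

```python
# Downtime allowed per year at a given availability level.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label}: {downtime:.1f} minutes of downtime per year")
```

At five nines, that works out to a little over five minutes of downtime per year.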
So as flash storage grows, it becomes less exciting. Flash is paving the way to a new kind of storage: NVMe.
Posted by yobitech on July 24, 2017 at 9:31 am under Backup, SAN, SSD. Comments Off on All Flash is Not the Same.
We (us men) are wired internally to achieve greatness. Whether it is having the fastest car or the “baddest” laptop, it is in us to want it.
Owning these high-tech gadgets and fast toys doesn’t necessarily make us better or faster. Most of the time it just makes us “feel” better.
In the storage business, SLC SSD drives, or Enterprise Flash drives, are the “Crème de la Crème” of all drives. Customers pay a premium for these drives, sometimes more than a well-equipped BMW 3 series per drive. Some SAN vendors use them as cache augmentation or cache extensions, while others use them as ultra-fast/Tier 0 disk for the “killer app” that needs ultra-low latency. Regardless, SSDs have captured the hearts of the power- and speed-hungry. What isn’t always discussed is that the fastest drives in the SAN don’t always mean the fastest performance.
There are a few factors that can slow down your SAN. Here are a few tips to make sure you are optimized:
1. Plumbing – Just like the plumbing in your house, water flow will always be at the mercy of the smallest pipe. If you have a 5” pipe coming into the house and a thin tube going to your bathtub, it will take a long time to fill that tub. Be sure to optimize throughput by using the highest available rated network speed end to end.
2. Firmware – Hardware devices have software too, not just your computers. This “thin” layer of software written specifically for a hardware device is called firmware. Make sure you are on the latest code, and read the “README” file(s) included for release notes.
3. Drivers – Devices also have software inside the operating system, called drivers. Even though devices have firmware, there is software that enables the operating system to use them. To put firmware vs. drivers in perspective: firmware is like the BIOS of the computer, the black screen you see when you turn on your computer that loads its basic information. Drivers are like the operating system: just as Windows 8 or OS X loads on top of the hardware, drivers load on top of the firmware of the device.
4. Drive Contention – Contention is when you over-utilize a device or drive. A common mistake is to put everything (applications and hosts) on the SAN and then run backups back onto the same SAN. Although it may seem logical and economical, it does a disservice to the users’ data. First, all the data is in one place; a SAN failure means losing both data and backups. Second, data first has to be read off the drives, then written back onto the same SAN (usually the same set of drives). This can cause a massive slowdown of the SAN, regardless of what drives you have in the system.
5. User Error – The most common and least talked about is user error, probably because nobody ever wants to admit mistakes. Misconfigurations in the SAN or the application are a common fault. Most people, men in general, will not read the manual and instead install by trial and error; the goal is, if it works, it is good. This gives a false sense of security, especially with systems becoming more and more complex. A misconfigured setting may never show up as a problem until much later, and sometimes catastrophic failures are the result of overlooked mistakes.
If you follow these simple steps to tighten up the SAN, you will achieve greatness through your existing investment.
Posted by yobitech on July 10, 2014 at 11:09 am under General, SAN. Comments Off on Guys with the Fastest Cars Don’t Always Win.
As a technical sales veteran in the storage field, I see people all the time: people who make wise decisions based on real-world expectations, and people who buy on impulse. You might say that I am biased because I work for a vendor, and although that might be true, I was also a consultant before I was in sales.
I operate under a different set of “rules” where my customer’s best interest comes before my quota. Here is a collection of the top 5 things you should never do when buying a SAN.
5. Buy out of a “good feeling”
Sales people are in the business of selling. That’s what they do; that’s their “prime directive”. It is their job to make you “feel good”. Make sure you do your homework and check every feature and line item on the quote so that you know what you are buying. A common practice is for sales people to put something in the quote thinking you may need it when, in reality, it may never be used. Make sure you talk to the technical sales person without the sales rep present. Ask for an honest opinion, but be objective, and ask about his background so you know his perspective. A technical sales person is still a sales person, but he is more likely to give you the honest, technical facts.
4. Buy a SAN at the beginning of the vendor’s quarter
SAN vendors are in business to make money, and they operate under the same sales cycles. If the company is publicly traded, you can look up when their quarters begin and end; some align to a calendar year and some are fiscal. Here is the fact… you will ALWAYS get the best deal at the end of a quarter, and the absolute best deal at the end of their year (4th quarter). Since buying a SAN is usually a long process, align your research and your buying approval with those quarters. This will get you the best deal for your money.
3. Believe what a vendor tells you
I write this with caution, because at some point you need to believe someone. Just keep in mind that the sales people courting you during the process have a quota to retire. The ones willing to back up their claims with objective facts and real-world customer references are the ones most likely to live up to expectations.
2. Buy a SAN without checking out their support
As a prospective SAN customer, once you are down to your finalists, take some time to call their support, perhaps at 2am on a Sunday or 6am on a Tuesday, and see what kind of experience you get. A common mistake is to buy a SAN, see things running well, and assume all is good; an outage, with you scrambling to get support on the phone, is not the time to test their support response. Also check what the industry says about their support. Other questions: where is the support center located? Is it US-based? Do they follow the sun?
1. Buy a SAN from a startup
I am a big fan of new and disruptive technologies; they are what make us a great nation. But just as startups can pop up overnight, they can also disappear overnight. Startups are great, but a startup SAN is not such a wise place to put my company’s “bread and butter”. I say this from experience, as I have seen startups come and go. The ones that stay are usually the ones bought by the bigger companies; the others are just hoping to be bought. Five years is a good milestone for a SAN company to pass, because by that time the customers that invested in their products will be back in the market to refresh. If a vendor makes it past 5 years, there is a good chance they will be around.
Posted by yobitech on March 14, 2014 at 1:15 pm under General, SAN. Comments Off on The top 5 things you should never do when buying a SAN.
Far too many times I have bought something with much anticipation only to be disappointed. If it wasn’t the way it looked or what it was promised to do, it was something else that fell short of my expectations. I have to say that the few companies that go beyond my expectations are the ones I keep going back to. The one I like to talk about frequently is Apple. Their products often surprise me (in a good way), and their intangible features bring a deep satisfaction way beyond what is advertised. The “new drugs” for me are Samsung and Hyundai (cars).
American marketing plays the leading role in setting this expectation. Marketing has become the “American” culture… having the newest, coolest and flashiest toys is what defines who we are. Unfortunately, the marketing of these products almost always promises more than the product itself delivers, yet we hang on to the hope that these products will exceed our expectations. This is why “un-boxing” videos are so popular on YouTube. Product reviews and blogs are also a good way to keep companies honest and help us with our “addictions” to our toys. This marketing culture is not limited to personal electronics; it is also true for products in the business enterprise.
Marketing in the Business Enterprise
The Backup Tape
I remember having to buy backup tapes for my backups. I often wondered how they could advertise 2x the native capacity of the tape. How can they make that claim? For example, an SDLT320 tape is really a 160GB tape (native capacity). How do they know that customers can fit 320GB on a 160GB tape? After doing some research, the conclusion I came to was that they really don’t know! It was a surprising fact to me that they can make such a claim based on speculation. How can they do this and get away with it? It is easy… it is what I call the “Chaos Factor”: when someone or something takes advantage of a situation to further their cause.
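For what it’s worth, the 2x figure isn’t pulled from thin air so much as from an assumption: tape vendors quote capacity assuming roughly 2:1 hardware compression, which only holds if your data is that compressible (already-compressed media gets closer to 1:1). A quick sketch of the arithmetic (the function name is mine):

```python
# Effective tape capacity under an assumed compression ratio.
def effective_capacity_gb(native_gb, compression_ratio):
    """Capacity the vendor can claim if your data compresses at this ratio."""
    return native_gb * compression_ratio

print(effective_capacity_gb(160, 2.0))  # the marketing number: 320 GB
print(effective_capacity_gb(160, 1.0))  # already-compressed data: 160 GB
```

The vendor has no idea which ratio your data will actually achieve, which is exactly the point of this post.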
In the case of the backup tapes, they capitalize on a few things that facilitate the Chaos Factor:
1. The Backup Software and
2. The Business Requirements.
The Backup Tape “Chaos Factor”
1. The Backup Software
Tape manufacturers know this all too well. Backup software is very complex, and virtually all backup administrators are far too busy worrying about one thing: completing the backups successfully. Checking whether tapes are being utilized to their advertised capacity is not something that even comes up in day-to-day operation. In fact, the only time tape utilization comes up is if management asks for it, and then it is usually a time-consuming exercise because backup software does not have good reporting facilities to compile this information readily. Tape utilization is simply not a concern.
2. The Business Requirements
Another reason is how backup software uses tapes. Tape backups are scheduled as jobs, and most jobs complete before the tape fills up. Depending on the company’s policy, most tapes are then ejected and stored off-site, so tapes are rarely ever filled up! This is normal for backup jobs; leaving tapes in the drive(s) just to fill them up goes against why companies do backups in the first place. Backup tapes are meant to be taken off-site to protect from disaster. A backup larger than a single tape is really the ONLY time a tape is actually fully utilized.
So this Chaos Factor is also used in the business of data storage. The SAN market is another one where the protection of data trumps our ability to efficiently manage the storage. The SAN market is full of dirty secrets, as I will outline below.
The SAN “Chaos Factor”
A dirty secret of the storage industry is the use of marketing benchmark papers. Benchmark papers are designed to give the impression that a product performs as advertised. The paper itself may be true, but sometimes the tests are “rigged” to give the product favorable results; in fact, sometimes these performance numbers are impossible in the real world. Let me illustrate… I type about 65 words per minute. Many people can, and would call that average, but if I wanted to “bend the truth”, I could claim I type 300 words per minute. I can technically type “at” 300+ words per minute, but in the real world, I don’t type like that. What good is a book with the one word (“at”) printed on 300 pages? That kind of claim holds no water, but it is the same technique and concept used in some of these technical papers. When results are touted, keep the vendor honest by asking what their customers see in day-to-day operation.
Here is another technique commonly used by vendors, what I call “smoke and mirror” marketing. It is a tactic used to mimic a hot new technology, feature or product. The main goal is to deliver the feature at the best possible price and downplay the side effects: deliberate engineering around providing the feature set at the expense of existing features.

Here is an example. I bought a new Hyundai Sonata last year. I love the car, but I am not crazy about the ECO feature that comes with it. I was told that I would save gas with this mode, and although I think I do get a few more miles on a tank, the cost I pay in lost power, torque and responsiveness is not worth using this feature at all. I believe this feature, along with a smaller gas tank capacity, eventually led to a class-action lawsuit over Hyundai’s gas mileage claims.

So for a vendor to incorporate new features, they sometimes have to leverage existing infrastructures and architectures, because it is what they already have. In doing so, they end up with an inferior product that emulates new features while masking or downplaying the effects. Prospective customers are not going to know the product well enough to see the impact of these nuances; they often just see the feature set in a side-by-side comparison with other vendors and make decisions based on that. The details are in the fine print, which is almost never read before the sale. As a seasoned professional, I always do my due diligence to research such claims, and I am writing this to help you avoid these mistakes by asking questions and doing research before making a major investment for your company.
Here are some questions you should ask:
• What trade magazines have you been featured in lately? (last year)
• What benchmarking papers are available for review?
• How does that benchmark compare to real-world workloads?
• What reference architectures are available?
• What customers can I talk to on specific feature set(s)?
Here are some things to do for research:
• Look through the Administrator’s Guide for “Notes” and fine print details. This will usually tell you what is impacted and/or restricted as a result of implementing the features
• Invite the vendors for a face-to-face meeting and talk about their features
• Have the vendor present their technologies and how they differ from the competition
• Have the vendor white-board how their technology will fit into your environment
• Ask the vendor to present the value of their technology in relation to your company’s business and existing infrastructure
• If something sounds too good to be true, ask them to provide proof in the form of a customer testimonial
I hope this is good information for you, because I have seen, time after time, companies buying into something that isn’t the right fit. Then they are stuck with it for 3-5 years. Remember, the best price isn’t always the best choice.
Posted by yobitech on March 4, 2014 at 2:53 pm under Backup, General, SAN. Comments Off on Read the “Fine Print”.
As you may remember when SATA drive technology came around several years ago, it was a very exciting time. This new low cost, high-capacity, commodity disk drive revolutionized the home computer data storage needs.
This fueled the age of the digital explosion. Digital photos and media quickly and affordably filled hard drives around the world. This digital explosion propelled companies like Apple and Google into the hundreds of billions in revenue, and it also propelled explosive data growth in the enterprise.
The SAN industry scrambled to meet this demand. SAN vendors such as EMC, NetApp and others saw the opportunity to move into a new market using these same affordable high-capacity drives to quench the thirst for storage.
The concept of using SATA drives in a SAN went mainstream. Companies that once could not afford a SAN can now buy a SAN with larger capacities for a fraction of the cost of a traditional SAN. This was so popular that companies bought SATA based SANs by the bulk, often in multiple batches at a time.
As time progressed, these drives started failing. SATA drives were known for their low MTBF (mean time between failures). SATA SANs employed RAID 5 at first, which protects against a single drive failure but not a dual drive failure.
Companies then started to employ RAID 6, whose dual-drive-failure protection meant that two simultaneous failures would not result in data loss.
The “Perfect Storm” even with RAID 6 protection looks like this…
– Higher capacity drives = longer rebuild times: The industry has released 3TB drives. Rebuild times vary by SAN vendor; I have seen 6 days for a rebuild of a 2TB drive
– Denser array footprint = increased heat and vibration: dramatically reducing MTBF
– Drive manufacturing outsourced to third-world countries = increased rate of drive failures, particularly in batches or series: quality control and management are lacking in outsourced facilities, resulting in mass defects
– Common MTBF in mass numbers = drives will fail around the same time: This is a statistical game. For example, a 3% failure rate for a SAN in a datacenter is acceptable, but when there are mass quantities of these drives, 3% will approach and/or exceed the fault tolerance of the RAID level
– Virtualized storage = complexity in recovery: Most SAN vendors now have virtualized storage, but recovery will vary depending on how they do their virtualization
– Media errors on drives = failure to successfully rebuild RAID volumes: The larger the drive, the greater the chance of media errors, which render small bits of data on the drive unreadable. RAID volume rebuilds may be compromised or fail because of these errors
Don’t be fooled into a false sense of security by having just RAID 6. Employ good backups and data replication as part of a solid business continuity and disaster recovery plan.
As the industry moves to different technologies other new and interesting anomalies will develop.
In technology, life is never a dull moment.
Posted by yobitech on February 25, 2012 at 10:13 pm under Backup, General, RAID, SAN, SAS. Comments Off on The Perfect Storm.
People often ask me the question, “What’s the difference between a Seagate 1TB 7.2k drive and a Western Digital 1TB 7.2k drive?” and I usually say: the manufacturer…
Other than some differences in the mechanism, electronics and caching algorithms, generally, there is not much that is different between hard drives.
Hard drives are essentially physical blocks of storage with a designated capacity, a set rotational speed and a connection interface (SCSI, FC, SAS, IDE, SATA, etc…). Hard drive performance is usually measured in IOPS (Input/Output Operations Per Second). For example, a 15k RPM drive will yield about 175 IOPS, while a 10k RPM drive will yield about 125 IOPS.
In the business setting, most companies store their information on a SAN (Storage Area Network). A SAN is also known as an intelligent storage array, commonly made up of one or more controllers (aka Storage Processors, or SPs) controlling groups of disk drives. In the SAN there is intelligence, and it lives in the controllers (SPs).
This “Intelligence” is the secret sauce for each storage vendor. EMC, NetApp, HP, IBM, Dell, etc. are examples of SAN vendors and each will vary on the intelligence and capabilities in their controllers. Without this intelligence, these groups of disks are known as JBOD (Just a Bunch of Disks). I know, I know, I don’t make these acronyms up.
Disks organized in a SAN work together collectively to yield the IOPS necessary to produce the back-end performance that meets the service levels the application demands. For example, if an application demands 500 IOPS on average, how many disk drives do I need to adequately service it? (This is an oversimplified sizing exercise; many factors come into play, such as RAID type, read/write ratios, connection types, etc.) With each hard drive producing a set number of IOPS, is it possible to “squeeze” more performance out of the same set of hard drives? The answer is yes.
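To make the sizing example concrete, here is a back-of-the-envelope sketch. The RAID write penalties and the 70% read ratio are my assumptions for illustration, and the per-drive figures are the rule-of-thumb numbers from above, not vendor specs:

```python
import math

# Rule-of-thumb IOPS per spindle, matching the figures quoted earlier.
IOPS_PER_DRIVE = {15000: 175, 10000: 125}

def drives_needed(app_iops, read_fraction, raid_write_penalty, rpm):
    """Back-end IOPS = reads + writes * RAID write penalty
    (commonly 2 for RAID 10, 4 for RAID 5, 6 for RAID 6)."""
    reads = app_iops * read_fraction
    writes = app_iops * (1 - read_fraction)
    backend_iops = reads + writes * raid_write_penalty
    return math.ceil(backend_iops / IOPS_PER_DRIVE[rpm])

# 500 IOPS at 70% reads on RAID 5 with 15k drives:
print(drives_needed(500, 0.7, 4, 15000))  # -> 6
```

Note how the RAID type alone changes the answer: the same workload on RAID 10 (penalty of 2) needs only 4 spindles.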
Remember the merry-go-round? Why is it always more fun on the outside than the inside of the ride? As kids, we all wanted to be on the outside of the ride, screaming, because things seemed faster. Not that the ride spun any faster; it’s that the farther out you are from the center, the more distance is covered in each revolution.
The same theory is true of hard drives. The outer tracks of a hard drive will always yield more data per revolution than the inner tracks, and more data means more performance. Utilizing the outer tracks of a hard drive to get better performance is a technique known as “short stroking” the disk, and it is used by a few SAN manufacturers. One vendor that does this is Compellent (now Dell/Compellent); they call the feature “Fast Track”. Compellent is a pioneer in the virtual storage arena, trail-blazing next-generation storage for the enterprise. Another vendor that does this is Pillar.
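The merry-go-round intuition can be put into numbers. A minimal sketch (the radii and bit density are figures I assumed for illustration, and real drives use zoned bit recording, which complicates the picture):

```python
from math import pi

# At a fixed RPM, the linear distance under the head per revolution
# scales with track radius, so more bits pass by per revolution on
# the outer tracks. Illustrative numbers, not a real drive spec.
RPM = 15000
REVS_PER_SEC = RPM / 60

def track_mb_per_sec(bits_per_mm, radius_mm):
    bits_per_rev = bits_per_mm * 2 * pi * radius_mm  # track circumference
    return bits_per_rev * REVS_PER_SEC / 8 / 1e6     # bits -> MB/s

inner = track_mb_per_sec(10_000, 20)  # assumed innermost track radius
outer = track_mb_per_sec(10_000, 45)  # assumed outermost track radius
print(f"outer track is {outer / inner:.2f}x faster than the inner track")
```

With these assumed radii the outer track moves 2.25x the data per revolution, which is why short stroking confines hot data to the outer edge.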
So at the end of the day, getting back to that question, “What’s the difference between a Seagate 1TB drive and a Western Digital 1TB drive?” For me, my answer is still the same… but it is how the disks are intelligently managed and utilized that will ultimately make the difference.
Posted by yobitech on April 20, 2011 at 11:09 am under General, NAS, SAN, SAS, SCSI. Comment on this post.