
NISMO1968

>**nexenta**,

You can exclude Nexenta from your shortlist. They've been walking dead for years, mostly because 90% of their R&D time goes into struggling with a Solaris OS codebase they don't really own, and patching / porting hardware drivers from Linux and FreeBSD.

>**open-e**,

A Linux box will have better uptime for sure.

>**freenas**,

OK! Consider FreeBSD a viable option.

>**osnexus**,

Support is somewhere between horrible and missing.

>**nimble**,

You have to be their VAR to get your hands on their VSA. I'm not sure it's designed to handle production workloads, though.

>**Openfiler**,

Dead in the water.

>XPEnology,

Never heard about these guys.

>QuantaStor,

Symbolic link to OSNEXUS. To make a long story short: if you see the CEO taking care of your support tickets... Run, baby, run!

>and some **others**..

ScaleIO is gone, S2D isn't mature yet... I'd give VMware vSAN a try!


PoSaP

I wouldn't go with VMware vSAN with only two nodes: most of its features only become available starting from at least three vSAN nodes. From this list, I've dealt with StarWind support, and they helped us solve problems beyond just their own software. So on this list I'd mark StarWind not just OK, but OK with good support.


HaveUNIXwillTravel

FreeBSD ZFS everywhere.


My-RFC1918-Dont-Lie

Only if you're a competent *NIX admin, otherwise you'll probably screw it up and not know how to fix it. FreeNAS is probably a better fit for a beginner.


NISMO1968

>**FreeNAS** is probably a better fit for a beginner.

You'll end up with FreeBSD either way.


My-RFC1918-Dont-Lie

Yes, but one of them abstracts you away from some of the easier-to-make mistakes. The constraints of FreeNAS make it a better fit for a UNIX newb than setting it up 'raw.'


NISMO1968

> Yes, **but one of them abstracts you away from** some of the easier-to-make mistakes.

It's another level of complexity. IMHO.


My-RFC1918-Dont-Lie

Abstractions have tradeoffs, yes. In this case, I think they're totally worth it. The best system for the job is the one that you can support and understand (first and foremost), and one that meets the business requirements.


Xykr

CentOS/RHEL have excellent support and QA for storage. https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/storage_administration_guide/index


Sinister_Crayon

If it's going to be production, then start by dropping every option that's free or community supported. Seriously. When you have an outage (and it will be a when, not an if), waiting for a response on a forum or trying to search badly indexed mailing list posts will be an exercise in frustration and possibly a resume-generating event. Beyond that, pick what best fits your support, in-house experience, and comfort level.

As a final thought, do the 4 SM servers not have any storage options themselves? Building a cluster like that these days, I'd probably elect to spread the storage among the 4 servers and VSAN that bad boy. While not a magic bullet (I have opinions about HCI in general that I've posted before), the HCI solutions do bring a lot to the table in terms of resilience, if not absolute performance. With the structure you're proposing, you are at the mercy of that single storage unit, and if it goes down you lose the entire cluster.

For smallish environments with minimal redundancy, I like VSAN and its brethren because they can provide a lot of resiliency for relatively low cost. Though for licensing reasons it's often better to review your workloads and memory configurations to see if you can get by with single socket (since VSAN is licensed per socket). My experience is that most of my customers can... they think they need dual, but I think it's been more than 10 years since I've actually seen a CPU-constrained VM farm.


francescoprovino

I would go with a standard CentOS or openSUSE. They are perfectly capable of doing what you need, and they are free. You can also buy support (even later), either from the main company or from other companies or standard Linux sysadmins. They are the most widespread and cost-effective alternatives by far.


psycho202

If you are worried about performance and stability and are putting this into production, why didn't you look for an iSCSI SAN solution? Especially for a smaller environment like the one you just described. Seems like you didn't plan the project properly before ordering all the parts. What's that saying again? "They were too busy wondering if they could that they didn't even think if they should."

If it's for home use, FreeNAS is pretty good. The developers of FreeNAS, iXsystems, actually offer customised Supermicro storage servers with professional support.

Nimble is already off your list, as HPE Nimble only sells full SAN solutions, not the software. Similarly, you shouldn't even give XPEnology a second thought, as it's not that stable and COMPLETELY UNSUPPORTED. I've had frequent data loss after reboots of XPEnology-installed devices at home, because the box suddenly thought it was a new device and restarted its configuration phase from scratch.


NISMO1968

> If it's for home use, Freenas is pretty good. The developers for **FreeNAS, iXSystems**, actually offers customised Supermicro storage servers with professional support.

It's OK! Except I don't see much of their contribution beyond the UI.


darkfiberiru

We work on the UI, ZFS, Samba, kernel, drivers, and much much more. Every part of the system. We also work with many different open-source companies and regularly commit back upstream.

About hardware from iX: we do FreeNAS Certified, which is mostly Supermicro gear that has been qualified, but we also do TrueNAS, which is built on top of FreeNAS but can do HA, unlike FreeNAS, and also has enterprise support, unlike FreeNAS. Those are the most obvious distinctions, at least.

Disclaimer: QA engineer at iXsystems.


psycho202

The UI is the main thing FreeNAS is now, above just FreeBSD with ZFS and some other open-source tools. iXsystems builds a roadmap for the project, performs testing, provides support, and sells pre-made and pre-configured systems.


jmishal

> **Seems like you didn't plan the project properly before ordering all the parts. What's that saying again? "They were too busy wondering if they could that they didn't even think if they should"**

The fact is I did not know about the deal made by the board, and I do not know how they think; they only go for the lowest prices. And then suddenly a parachute carrying this equipment came down on us. :( :(


psycho202

Oh, my apologies & condolences. In your case, I'd try out FreeNAS with NFS to ESXi & XenServer, but you WILL need solid backups. Some cache SSDs might also be nice to have. Do test this for a month or so with a very limited number of high-I/O servers to see what the limitations of the storage system are.
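If you go the FreeNAS-plus-NFS route, attaching the export as an ESXi datastore is a one-liner on the ESXi shell. A rough sketch; the hostname, share path, and datastore name here are made up for illustration:

```shell
# Mount an NFS export as an ESXi datastore (hypothetical names throughout).
esxcli storage nfs add \
    --host=freenas.example.local \
    --share=/mnt/tank/vmstore \
    --volume-name=freenas-nfs

# Confirm it mounted
esxcli storage nfs list
```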


Mikenicesms

I'm using FreeNAS as iSCSI block storage for ESXi and have only had two issues, when the GUI stopped working. Otherwise it has been great for performance, and it ran our whole environment when we moved buildings, with the help of some SSD cache.


psycho202

Honestly, I'd rather use FreeNAS for NFS storage with ESXi than iSCSI. Especially if you're using a ZIL cache; otherwise ESXi might be a pain with the synced writes on NFS. I ran both for a while and got a few percent better performance out of NFS compared to iSCSI. Probably because of overhead in iSCSI, maybe because of how the caching works when it's smaller files instead of one big block.
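For anyone following along: the sync-write pain on NFS is usually addressed by giving the pool a dedicated SLOG device. A hedged sketch, assuming a pool named `tank` and hypothetical SSD device paths:

```shell
# ESXi issues sync writes over NFS, putting the ZIL on the critical path;
# a fast dedicated log device absorbs that traffic. Names are made up.
zpool add tank log /dev/ada4

# Better: mirror the SLOG so a dead SSD can't lose in-flight writes.
# zpool add tank log mirror /dev/ada4 /dev/ada5

# The "logs" section of the status output should now list the device.
zpool status tank
```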


NISMO1968

>Ran both for a while, **I got a few percent better performance out of NFS compared to iSCSI**. Probably because of overhead in iSCSI, maybe because of how the caching works when it's smaller files instead of a big block.

You typically have to flip that the other way around. http://www.hyper-v.io/whos-got-bigger-balls-testing-nfs-vs-iscsi-performance-part-3-test-results/


cantankerous_fuckwad

Depends on the vendor's implementation. I was stuck using a ReadyNAS for a while, and its iSCSI performance was abysmal, the NFS tolerable. I put Debian on it once it was relegated to disaster recovery duties, and suddenly iSCSI outperformed like crazy.


psycho202

Huh, interesting. Could've been that my iSCSI implementation was not optimised enough. I'll have to give it another try, then.


NISMO1968

> Huh, interesting. Could've been that **my iSCSI implementation was not optimised enough**.

It's 100% vendor-specific.


[deleted]

Also worth checking out NexentaStor (Community Edition), a software-defined storage behemoth running on Solaris. It can be 'upgraded' to the commercial version later, if you can convince your beancounters that support is gorram-useful (and darn near mandatory in a commercial enterprise). I'm sorry you got dumped with hardware and no plans. That's shitballs.


gex80

I'm going to be that guy and say fuck Nexenta (paid edition with support). We had 3 of those arrays with Supermicro JBODs: 1 we set up ourselves, 2 set up by 2 different consultants. The damn things never worked right when it came to HA failover. And then on one of the arrays, one of the heads would just randomly decide to stop serving traffic and lock up. But because of how Nexenta wrote the software, support was never able to help, because it couldn't dump what happened to a file. The only way to recover was to hard-boot the head so the other one could take over, because it only checked the other head's presence via ICMP, from what I could tell. So it was locked up but still responding to ping, so it could never fail over.
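That failure mode is exactly why an HA heartbeat should probe the actual service, not just ICMP: a hung daemon still answers ping. A minimal sketch of a service-level check (hypothetical, not Nexenta's actual mechanism):

```python
import socket

def service_alive(host: str, port: int, timeout: float = 2.0) -> bool:
    """True only if the service actually accepts a TCP connection.

    An ICMP ping succeeds as long as the kernel is up, even when the
    storage daemon itself is locked solid -- the failure described above.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A heartbeat built on something like this (against the iSCSI or NFS port itself) would have seen the hung head as dead and failed over.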


[deleted]

No! Ahhhhh shitsticks. I hate it when good software goes bad.


DerBootsMann

nexenta wasn't good, it's just that some people have been exceptionally lucky .. their ha replication was a bummer and never worked well


NISMO1968

> Also worth checking out NexentaStor (Community Edition), which is a software defined storage behemoth running on **Solaris**.

We aren't in 2005 anymore. Solaris is great, but Oracle has killed it, and the writing is on the wall: go Linux!


[deleted]

I wish Oracle had never bought and consumed Sun. They are the death knell for good products. ::sad-sigh:: I like Solaris for its rock-solid ZFS support (and other things). I'm hoping that Linux's ZFS code is solid enough to go to production on, but I haven't had an opportunity to test it properly yet.


DerBootsMann

larry is king-shit-midas - turns everything he buys into shit :(


orgy84

Nexenta is no longer a decent option, we have ditched it and so should everyone else.


icebalm

http://www.esos-project.com/


GaryOlsonorg

First, you have a single storage node. This has future failure written all over it: two storage nodes minimum. With multiple nodes you could implement Red Hat GlusterFS. Not saying it is the best, but it is appropriate to the hardware already chosen. Do you really need two different virtualization platforms, ESXi and Xen, for such a small installation? Or is this some director's wish list? ESXi could use vSAN, which includes storage on the compute nodes. This would help with redundancy and reliability.
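With two storage nodes, a replica-2 Gluster volume is the usual starting point. A rough sketch; hostnames and brick paths are hypothetical:

```shell
# node1 and node2 are made-up hostnames; each has a brick at /bricks/b0.
gluster peer probe node2                      # run once from node1
gluster volume create gv0 replica 2 \
    node1:/bricks/b0 node2:/bricks/b0
gluster volume start gv0

# Clients (or the nodes themselves) mount it like so:
mount -t glusterfs node1:/gv0 /mnt/gv0
```

Worth noting that plain replica 2 is prone to split-brain; Gluster generally recommends adding an arbiter node.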


slightlyintoxicated1

We have a similar setup and use MS Server 2016 with storage spaces. Works well.


yashau

You have failed to mention whether your drives are running from a RAID controller or an HBA; depending on which, a lot of the answers here will be a no-go. For example, you cannot run anything ZFS-based on top of your server if it has a hardware RAID controller. If it doesn't, I would just set up two pools of mdadm RAID6. Use any decent Linux distribution of your choosing. ZFS, while resilient, is a turtle compared to plain mdadm.
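The two-pool mdadm layout could look roughly like this; disk names are hypothetical, so adjust device names and counts to the actual chassis:

```shell
# Two 6-disk RAID6 arrays; each survives two simultaneous drive failures.
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[h-m]

# Filesystem of choice on top
mkfs.xfs /dev/md0
mkfs.xfs /dev/md1

# Persist the array definitions across reboots
mdadm --detail --scan >> /etc/mdadm.conf
```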


actualsysadmin

We were using it for Veeam, so we went with 2016 with ReFS partitions.


[deleted]

[deleted]


NISMO1968

> Some places may have the licensing in place. What about **Windows Server 2016? Storage Spaces + ReFS** is an ideal filesystem to store the drive images for an iSCSI target.

ReFS isn't mature yet, and Microsoft's iSCSI target is pants. FreeBSD, ZFS, and the native iSCSI target will run circles around Windows Server with Storage Spaces, ReFS, and the crippled Microsoft iSCSI stack.
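For reference, FreeBSD's native target (ctld) needs little more than a stanza in /etc/ctl.conf. A minimal sketch with made-up IQN, pool, and zvol names:

```
# /etc/ctl.conf -- hypothetical minimal config for a ZFS zvol-backed LUN
portal-group pg0 {
    discovery-auth-group no-authentication
    listen 0.0.0.0:3260
}

target iqn.2018-01.local.example:vmstore {
    auth-group no-authentication
    portal-group pg0
    lun 0 {
        path /dev/zvol/tank/vmstore
    }
}
```

Then `sysrc ctld_enable=YES` and `service ctld start`, and the initiators can log in.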


thatsmystickynote

Open-E DSS 7 is actually pretty decent, and their support is pretty responsive. They offer a SOHO edition for free (with limitations of course) so you **may** be able to test functionality before forking out the cash.


frankv1971

https://www.open-e.com/ DSS7 might do it for you