
HD VIDEO NAS


lude

12 posts in this topic


Howdy!

 

This being my first post here, I am not sure it's the right place for it; if it's not, please move it to the right place.

 

OK, I want to build a NAS on Linux that OS X, XP and Linux machines can all talk to. It needs at least 2 Gb/s. I am thinking maybe AoE over a multiport gigabit adapter. I will only have 2-3 machines connected to this shared storage, and only one of them needs super-fast access: Final Cut Pro on OS X reading and writing uncompressed HD. If I then mount the HFS+-partitioned RAID in Linux too and share it over ordinary gigabit to the two other machines (XP, Linux, OS X...), would I have a working solution? I have no idea whether that works, though...
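As a rough sanity check on that 2 Gb/s figure, here is a quick back-of-envelope calculation; the resolution, bit depth and per-link overhead numbers below are illustrative assumptions, not measurements:

```python
# Back-of-envelope check: can N gigabit links carry uncompressed HD?
# Assumes 1080/25 frames, 10-bit 4:2:2 chroma (20 bits per pixel).

def stream_mb_per_s(width, height, bits_per_pixel, fps):
    """Raw video data rate in MB/s (1 MB = 1e6 bytes here)."""
    return width * height * bits_per_pixel / 8 * fps / 1e6

hd10 = stream_mb_per_s(1920, 1080, 20, 25)   # ~129.6 MB/s

# A gigabit link moves 125 MB/s raw; assume ~118 MB/s after
# TCP/IP + Ethernet overhead (rough figure, not measured).
usable_per_link = 118.0

links_needed = -(-hd10 // usable_per_link)   # ceiling division
print(f"one stream: {hd10:.1f} MB/s, links needed: {links_needed:.0f}")
```

So one uncompressed 10-bit stream already slightly exceeds a single gigabit link, which is why multiport setups come up below.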

 

All ideas are much appreciated!

 

I am completely new to this, and I should probably just buy a very expensive system somewhere, but guess what? I can't afford it.

 

Lude


I am building one of these for a client right now: 4 gigabit NICs and 8.4 TB of storage, using an Intel SSR212C storage server and 12 × 750 GB drives.

 

Getting the network adapters teamed is easy; all the docs are available on Intel's support site. The only catch is that every adapter has to be an Intel server gigabit adapter, and for top performance you have to use a managed switch (for IEEE 802.3ad link aggregation).
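One caveat worth knowing about teaming: 802.3ad-style aggregation balances traffic per flow, not per packet, so a single connection never goes faster than one member link; you only get the aggregate bandwidth across several simultaneous flows. A toy sketch of the idea (the hash below is purely illustrative; real adapters and switches use their own hash of MAC/IP/port fields):

```python
# Sketch of per-flow load balancing in a link aggregation team.
# Each flow (here identified by its IP pair) is hashed to ONE
# member link, so every packet of that flow uses the same link.
# Illustrative only; not the actual Intel or switch hash.

def pick_link(src_ip, dst_ip, n_links):
    return hash((src_ip, dst_ip)) % n_links

links = 4
flows = [("10.0.0.2", "10.0.0.1"),
         ("10.0.0.3", "10.0.0.1"),
         ("10.0.0.4", "10.0.0.1")]
for src, dst in flows:
    # Deterministic: a given flow always lands on the same link,
    # so one TCP connection tops out at one link's speed.
    assert pick_link(src, dst, links) == pick_link(src, dst, links)
```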

 

Make the server an iSCSI target (faster than SMB) and you're set.


Wow, that sounds great. As I am new to this, I have some questions.

 

Are you running Linux on the server?

 

Are you connected to the server with a multiport (Small Tree?) Ethernet card from the Mac Pro?

 

What software do the clients need for iSCSI (and what about blazeFS and AoE)?

 

Do I need a switch, or can I use twisted pair and connect straight to the server via iSCSI for the multiport link, and use a hub/switch for single-gigabit SMB (or a similar protocol)?

 

I am thinking of buying a cheap case with 16× SATA II hot-swap bays and popping in a motherboard with a Xeon, a multiport NIC (or is it just as good with several single NICs?) and a RAID controller. But that Intel one sure looks sweet!

 

What does this config cost? Please break it down if you want.

 

Thanks a mil in advance!

 

Lude


1: Yes

2: I have no idea what that is :blowup:

3: They need an iSCSI driver/software, because iSCSI is as fast as you can go short of Fibre Channel (which is also just SCSI over a different transport anyway).

4: You must use a switch; the Intel load-balancing kernel modules rely on a switched infrastructure.

5: If you need proper uptime, you need on-site support, so buy a single unit that you're sure will work.

6: All up, the client is paying around $16,000 Australian (14k or so US). The largest expense is the SSR212 plus drives (12 × 750 GB is not cheap).


2: Let me rephrase that... what NIC does the client use to speak to the server?

Small Tree is this, BTW: www.small-tree.com/products/GB-cards/PEG4.htm

 

Do you have any suggestions for a cheap-ish switch with a few ports that has all the jumbo-frame support I need?

 

Again... Thanks a MILLION for your help.


I am using dual-port PCI-X Intel server cards; they are fast and reliable.

 

Cheap switches will be unmanaged, so they will be slow in this type of environment. Netgear makes reasonably priced managed switches; you won't regret spending the extra when you see the bandwidth improvement. Jumbo frames don't really compare to 4+ adapters running in a team (load balanced).
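A quick calculation supports that last point: jumbo frames only trim per-frame header overhead by a few percent, whereas each extra teamed link adds a whole link's worth of bandwidth. Standard IPv4/TCP header sizes are assumed, with no options:

```python
# Rough TCP payload efficiency of one Ethernet frame at standard
# vs jumbo MTU. On-wire overhead per frame: Ethernet header (14)
# + FCS (4) + preamble (8) + inter-frame gap (12) = 38 bytes.

def tcp_efficiency(mtu):
    payload = mtu - 40            # minus IPv4 (20) + TCP (20) headers
    on_wire = mtu + 38            # plus Ethernet framing overhead
    return payload / on_wire

std   = tcp_efficiency(1500)      # ~0.949
jumbo = tcp_efficiency(9000)      # ~0.991
print(f"1500 MTU: {std:.1%}, 9000 MTU: {jumbo:.1%}")
```

So jumbo frames buy roughly 4% more payload per link, while a second teamed link buys another 100% (spread across flows).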


Ok cool!

 

I'll dig into these things... one final question: is there any potential problem on the Intel OS X side of things? You speak only of the server and the server's NIC cards. What card (or cards?) would you use in a Mac Pro running OS X connected to this server you are setting up?

 

Lude


You state you want access from OS X, Linux and XP. Do you mean more than one machine at a time using iSCSI or AoE? In that case the disk would have to be formatted with a multi-user (clustered) file system, and there is no such thing that is both free and cross-platform. I've built something similar successfully using a multi-user transfer protocol instead of a multi-user FS: specifically the Apple Filing Protocol, though Samba would work too, just not as nicely.

 

Gentoo Linux

Avahi (the Linux version of Bonjour; HOWL is an older mDNS framework, also for Linux)

Netatalk (AFP, AppleTalk and printer sharing; AppleTalk not needed)

mt-daapd (just for fun: iTunes music sharing)

 

I then installed a 5-disk SATA II RAID 5 with 500 GB drives. It performs well; we only use DV (~3.5 MB/s, I think), but it works with many streams. HDVcam, I think, is around ~30 MB/s, and I've seen disk reads around ~100-120 MB/s, so you should be good with a couple of live streams. Gigabit is the limiting factor. I can't use full jumbo frames; instead I have the MTU at 7000 bytes (9000 is the jumbo standard), because the onboard NIC on the server is a cheap nVidia nForce chip and tends to overflow and drop packets at higher rates. I suggest buying a PCI-X Intel gigabit card instead.
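Using the ballpark rates quoted above, a rough stream-count estimate looks like this (all figures are this thread's own approximations, not benchmarks):

```python
# How many simultaneous streams fit in the sustained throughput
# quoted above? DV ~3.5 MB/s, "HDVcam" ~30 MB/s, and ~100 MB/s
# taken as the conservative end of the 100-120 MB/s disk reads.

def max_streams(link_mb_s, stream_mb_s):
    return int(link_mb_s // stream_mb_s)

sustained = 100.0
print("DV streams: ", max_streams(sustained, 3.5))   # 28
print("HDV streams:", max_streams(sustained, 30.0))  # 3
```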

 

All this is installed on a gigabit network supporting jumbo frames.

 

I used the D-Link DGS-2208 8-port switch ($50), though there are better options out there now.

Remember: use good cables and keep them short. At high data rates, propagation delay could be a factor if you tend to use untrimmed, long spools of cable.


  • 1 month later...

Hi there,

 

Has anyone tested such a setup in a real-life situation yet? How does it work in terms of stability and actual throughput?

 

Lounger says he's seen 100-120 MB/s benchmarks - have you had a chance to try it with HD?

 

Curious to know how Jester's server is performing as well - have you got any news?

 

We have to upgrade our complete studio setup to HD, and we are wondering whether an "economical" network storage setup could work well for high-end HD, or whether we would be better off sticking to a traditional RAID connected straight to the workstation. Blackmagic's uncompressed 10-bit HD needs "only" 132 MB/s per stream, after all...
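For reference, that 132 MB/s figure can be checked against a single gigabit link; the usable-rate factor below is an assumed overhead estimate, not a measurement:

```python
# Sanity check on the 132 MB/s figure: a gigabit link tops out at
# 125 MB/s raw, and less after protocol overhead, so one
# uncompressed 10-bit HD stream cannot fit on one gigabit port.

GIGABIT_RAW = 1000e6 / 8 / 1e6        # 125.0 MB/s
usable = GIGABIT_RAW * 0.94           # ~117.5 MB/s, assumed overhead

stream = 132.0                        # MB/s, from Blackmagic's spec
print("fits on one gigabit link:", stream <= usable)        # False
print("teamed links needed:     ", -(-stream // usable))    # 2.0
```

So for uncompressed 10-bit HD, either a teamed multi-gigabit setup (with the per-flow caveat discussed earlier in the thread) or direct-attached RAID is needed per stream.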

 

Thanks a lot in advance,

 

Boris

