• expired

Synology 10GB Ethernet + M.2 Adapter Card E10M20-T1, RJ-45; 1 Port $378.34 + Delivery ($0 with Prime) @ Amazon US via AU

M.2 SSD & 10GbE combo adapter card for performance acceleration
Achieve Multi-Gig Ethernet transmission speeds of up to 10 Gbps
Accelerate random I/O performance with the dual M.2 2280/22110 NVMe SSD slots
Dedicated cache slots free up primary drive bays for data storage

More info:
https://www.synology.com/en-au/products/E10M20-T1

10GbE bandwidth and M.2 SSD cache, all at once
Synology E10M20-T1 helps you boost I/O performance and network bandwidth simultaneously using just one PCIe expansion slot on your Synology NAS.

Compatible models: (From Synology Website) https://www.synology.com/en-au/products/E10M20-T1
SA series: SA3600, SA3400
20 series: RS820RP+, RS820+
19 series: DS2419+, DS1819+
18 series: RS2818RP+, DS3018xs, DS1618+

Next cheapest price in Australia $449.00

Edit:
Typical Use Scenario
2x M.2 SSDs for data caching (1x read, 1x write)
1x 10GbE Ethernet port (RJ45)

This allows you to get read/write cache + 10GbE Ethernet without sacrificing a hot-swap bay on models with only 1x PCI Express expansion slot.
Prior to this, supported Synology devices only had the one expansion slot, so you were forced to choose between an M.2 cache adapter and 10GbE Ethernet.

Price History at CamelCamelCamel.

Related Stores

Amazon AU (Marketplace)
Amazon Global Store

Comments (closed)

  • Is this expansion card for Synology devices? Which series is it compatible with?

    • It's just using a PCIe slot so it should work with anything, though it might need specific drivers, whereas a Synology is plug and play

    • +1

      6 bay DS1618+ or 8 bay DS1819+

    • +2

      I had the same questions, but figured if I have to ask it's probably not for me.

      • I've updated the description

      • I originally asked why the card was so expensive, then I realised it was Synology branded and probably meant for their units, and edited my question appropriately. :P

  • +2

    …Plus you also need a 10Gb Ethernet network to make full use of it.

    • Or just another 10GbE NIC on the other side, i.e. your main computer. Which is probably cheaper than going to a full 10Gb LAN.

      • This is interesting. I've always wanted to toy with the idea of 10Gb Ethernet on the cheap. Do you have a sample setup/config?

        • An end-to-end 10GbE doesn't make much sense. It can transfer data at over 1,000 megabytes per second, so even a sequential file read from an M.2 drive would have trouble constantly saturating it, and that is already a very edge case (more like a synthetic test than anything, really).

          10GbE is only practical for many-to-one connections, like lots of 1GbE endpoints accessing a very beefy server.

          • +1

            @ocoolio: Try moving around multiple 4K RAW video files from a workstation to the file store, while importing the next batch of work to a RAID0 scratch drive. There are use cases for it.

            • @BlinkyBill: Could you elaborate on this a bit? So you import a new raw video, to work on it for … how long?
              Then the file export is what size? So you export it over the network (but why?) onto the storage of what size? And this process must be done on 10G and cannot be done on 1G because it needs to complete in N seconds instead of 10xN seconds due to an actual limitation of what?

              Your scenario is the definition of a very edge case as I mentioned. Do these exist? Sure. Do many people have them? Surely not.

              • +5

                @ocoolio: I run a small architecture studio; individual project files aren't anywhere near 'that' big, typically 200MB to 1GB. I upgraded our server to SSD to help with saving back lots of little data packets when staff sync files, and it's been amazing. I could have stopped there, so why did I upgrade to 10Gbit? Whenever someone hits sync, it saves the file to the server and then sends the changes to whoever has the file open. Simplistically, think of a 500MB file copied simultaneously back and forth to 3 computers on the network. Before the SSD upgrade this process would take maybe a minute every time someone pressed sync, so people (myself included) would avoid doing it, as the lag would interrupt their workflow. Now it's absolutely instant. Everybody syncs more, which means data integrity is up, which saves time, and saves money.

                Second reason was that I work off a Windows server, and files incrementally back up to my Synology at certain intervals. Although a daily snapshot is small (15-25GB), if you add, say, a daily backup, a weekly backup and a monthly backup, you start to hit 1TB of data being copied over the network. I used to run this after hours and it would take between 30 minutes and a couple of hours. Now it syncs in less than ten minutes.
                It was a relatively cheap upgrade: $150 for each computer's NIC, and you can buy a QNAP switch for $1000. I already had Cat 6A in the wall, but it runs even on Cat 6 in my tests. I don't use any of the cache features on the Synology as I don't need them, and this card was only $200 more than one without cache features. If our Windows server ever went down, we could reroute our projects to run off our Synology with just a hit on collaboration features, but performance would be the same for end users.

          • @ocoolio: The sequential read/write speed of your run-of-the-mill 970 EVO Plus is about 3200 to 3500 MB/s, which would fully saturate a 28 gigabit connection.
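
            For anyone who wants to sanity-check those numbers, here is a rough back-of-the-envelope conversion (a minimal sketch; the drive figure is the one quoted above, and real-world throughput will be lower once protocol overhead is included):

```python
# Rough line-rate vs. drive-throughput comparison (ignores TCP/SMB overhead).
def gbit_per_s(mb_per_s: float) -> float:
    """Convert MB/s to Gbit/s (1 MB = 8 Mbit, 1 Gbit = 1000 Mbit)."""
    return mb_per_s * 8 / 1000

throughputs = {
    "1GbE link (wire speed)": 125,     # ~125 MB/s
    "10GbE link (wire speed)": 1250,   # ~1250 MB/s
    "970 EVO Plus (sequential, figure quoted above)": 3500,
}

for name, mbps in throughputs.items():
    print(f"{name}: {mbps} MB/s ≈ {gbit_per_s(mbps):.1f} Gbit/s")

# A single fast NVMe drive can outrun a 10GbE link roughly 2-3x, while even
# one modern mechanical drive is typically enough to fill a 1GbE link.
```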

        • Well, you just plug both in directly, set static IPs in the same subnet, and it should work.
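
          If you do go the direct-connection route, here's a rough sketch of how you could sanity-check the link once the static IPs are set. iperf3 is the usual tool for this; the script below is just a quick stand-in, the 10.0.0.x addresses are examples, and Python itself may become the bottleneck before the NIC does:

```python
# Minimal point-to-point throughput check. Run "server" on one machine and
# "client <server-ip>" on the other, e.g. with static IPs 10.0.0.1 / 10.0.0.2.
import socket
import sys
import time

PORT = 5001
CHUNK = b"\0" * (1 << 20)   # 1 MiB payload
CHUNKS_TO_SEND = 2000       # ~2 GiB total

def server() -> None:
    with socket.create_server(("0.0.0.0", PORT)) as srv:
        conn, _addr = srv.accept()
        received, start = 0, time.time()
        while True:
            data = conn.recv(1 << 20)
            if not data:                 # client closed the connection
                break
            received += len(data)
        secs = time.time() - start
        print(f"{received / secs / 1e6:.0f} MB/s "
              f"({received * 8 / secs / 1e9:.1f} Gbit/s)")

def client(host: str) -> None:
    with socket.create_connection((host, PORT)) as conn:
        for _ in range(CHUNKS_TO_SEND):
            conn.sendall(CHUNK)

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])   # e.g. python nettest.py client 10.0.0.1
```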

        • For me the cheapest way to get a point-to-point 10Gb network was to buy a pair of used MNPA19-XTR 10Gb Mellanox SFP+ cards and a direct-attach copper SFP+ cable; it was around 120 AUD in total. Keep in mind that the maximum length of such a cable is only 3-5 metres. I was playing with distributed computing / deep learning on two systems, with the data usually stored on one of them. The 1Gb network was a bottleneck quite often, and a fast NVMe drive was sufficient to saturate the 10Gb link.

          The QNAP QSW-308-1C may be another interesting option to connect three 10Gb nodes to the rest of a 1Gb network; I'm thinking of switching my setup to it.

          • @DmytroP: Didn't the MNPA19-XTR clog up the processing capacity of the machines (the internal bus, I mean)?

            For distributed computing, you can try using torrent to distribute the data to all the nodes before the processing. If the network was a bottleneck, you might have had an architectural issue. For instance, Spark can work on dozens of terabytes of data with barely any network use (1G rarely saturates and is never the actual bottleneck; poorly written jobs are :)).
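
            To make the Spark point concrete, here is a hedged sketch (the dataset path and column names are made up, not from this thread): a map-only job lets each worker read and write its own splits with little traffic, while a groupBy forces a shuffle of intermediate data across the network, which is where an undersized LAN or a poorly written job actually hurts.

```python
# Hypothetical PySpark job contrasting narrow vs. shuffle-heavy stages.
# Paths, columns and the conversion rate are placeholders for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("network-use-sketch").getOrCreate()

# Workers read their own splits (assuming data-local storage, e.g. HDFS).
df = spark.read.parquet("/data/events")

# Narrow transformation: rows stay on the worker that read them,
# so almost nothing crosses the network.
cleaned = df.withColumn("amount_aud", F.col("amount_usd") * F.lit(1.5))
cleaned.write.mode("overwrite").parquet("/data/events_aud")

# Wide transformation: groupBy triggers a shuffle, so intermediate data is
# exchanged between workers; this is the part that stresses the LAN.
totals = cleaned.groupBy("customer_id").agg(F.sum("amount_aud").alias("total_aud"))
totals.write.mode("overwrite").parquet("/data/customer_totals")
```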

            • +1

              @ocoolio:

              Didn't the MNPA19-XTR clog up the processing capacity of the machines (the internal bus, I mean)?

              It should not; it uses much less bandwidth compared to 4 GPUs on the bus. At least I have not noticed any significant impact.

              For distributed computing, you can try using torrent to distribute the data to all the nodes before the processing.

              It depends on the task. Sometimes computation involves frequent syncing of large amounts of data between nodes just due to the nature of the task/algorithm: for example, on each step parts of the data are processed on multiple GPUs/nodes, and quite large intermediate results have to be combined over the network before the next iteration can start. Sometimes it's also more convenient and much cheaper to mount the dataset partition over the 10Gb link than to buy another large NVMe drive, etc.
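
              To picture the "combine intermediate results over the network each step" part, here is a hedged sketch of that kind of sync using torch.distributed (the 10.0.0.1 address, world size and tensor size are invented for illustration; run one copy per node with RANK/WORLD_SIZE set):

```python
# Sketch of a per-iteration all-reduce between two nodes over the direct link.
import os
import torch
import torch.distributed as dist

def run(rank: int, world_size: int) -> None:
    dist.init_process_group(
        backend="gloo",                       # CPU/Ethernet backend
        init_method="tcp://10.0.0.1:29500",   # example address on the 10Gb link
        rank=rank,
        world_size=world_size,
    )
    # Stand-in for a large intermediate result from this node's share of the
    # work: 64M float32 values, roughly 256 MB crossing the network per step.
    local_result = torch.randn(64 * 1024 * 1024)
    for _step in range(10):
        dist.all_reduce(local_result, op=dist.ReduceOp.SUM)  # sync over the LAN
        local_result /= world_size                           # average the results
    dist.destroy_process_group()

if __name__ == "__main__":
    run(int(os.environ["RANK"]), int(os.environ["WORLD_SIZE"]))
```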

    • +1

      I think without 10Gbit networking there would be little point in caching either. A Synology should be able to easily saturate at least a 1 gigabit link with just mechanical drives.

  • Nice, the big daddy version: 10GbE + M.2. Up until this model it's been one or the other.

  • Thanks. First, apart from this card, you have to add the M.2 SSDs too, so that's probably another $100-150 (those Crucial P1 deals maybe?).
    Then for 10Gb/s networking, the cheapest I found for managed switches is the Mikrotik CRS305, which is around USD $140, but it only has 4 bare SFP+ ports, so you have to add 10GBASE-T converters if you want to use Cat 6/6A cables. Do note that this Synology card has an RJ45 port, so you need a Cat 6/6A cable anyway to plug the Synology into a switch. Be careful choosing those 10GBASE-T converters and make sure they can do multiple speeds (1Gb/s, 2.5Gb/s, 5Gb/s and 10Gb/s); some don't. Same for the NICs that go into the PCs; you want flexibility.
    Personally, my use case is connecting several PCs to the NAS for photo post-processing. I've found that my computers can't saturate the 10G link individually, but several at once can. YMMV. Also, if you can use DAC cables they are great and cheap (~$40 each, and one DAC covers both ends) but limited in length. Lastly, if you decide to use 10GBASE-T converters, they all run very hot (>70°C), so good ventilation is needed. I have tried converters from fs.com, 10Gtek and Mikrotik, all around $80-100 and AU stock except the 10Gtek. They all work to spec.

  • Whilst this exact part isn't relevant for my consumer 1019+, I'm keen to hear how many people are using SSD caches in their Synologies and whether it's been worthwhile.

    I've got a read-only cache in my 1019+ with a 90% hit rate on average, since I mainly run VMs/Dockers 24/7, but I'm debating whether I want to also add a second SSD for the read/write cache and whether or not it would really improve my daily experience (or if the bottlenecks are elsewhere). Thoughts? There doesn't seem to be much info online beyond people discussing the old DSM issues where it crashed when the cache got full.

  • I wish Ubiquiti had cheaper 10Gbit switches. :(

    • How many of your devices actually need 10Gb? I've got one of their switches with two SFP+ ports, which is good enough for me.

    • Agree, or more copper ports like the QNAP range!

  • To my knowledge Synology only lets you use SSDs for caching. QNAP lets you use them for caching OR direct access, with the latter being vastly preferable in many use cases.

    This is for consumer level devices at least.
