Posted by z3d in ArtInt (edited)

While a growing number of startups offer AI accelerators, many of them are more or less vaporware, and even among those actually shipping products, the other big challenge is that their software stacks are immature or an outright mess. Surprisingly, there is a company called MemryX, spun out of AI research at the University of Michigan, that is both shipping actual hardware at a decent price point and providing a software stack that is a pleasant experience on both Windows and Linux. Here are my initial experiences testing the MemryX M.2 module, which features four of their in-house MX3 AI accelerator chips.
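
To give a feel for what that software stack looks like in practice, here is a minimal inference sketch. It assumes the Python API MemryX documents on their developer hub (a `memryx` package with an `AsyncAccl` class driven by input/output callbacks, fed a model pre-compiled to a `.dfp` file); the model path and callback names are placeholders, so verify the exact signatures against the official docs before relying on them.

```python
# Minimal sketch of running a pre-compiled model on MX3 hardware, assuming the
# memryx Python package and its AsyncAccl class as documented on MemryX's
# developer hub; "model.dfp" and the callbacks below are placeholders.
import numpy as np
from memryx import AsyncAccl

accl = AsyncAccl(dfp="model.dfp")  # model compiled ahead of time with the MemryX neural compiler

frames_sent = 0

def send_frame():
    # Return one input frame per call; return None to signal there is no more input.
    global frames_sent
    if frames_sent >= 10:
        return None
    frames_sent += 1
    return np.random.rand(1, 224, 224, 3).astype(np.float32)

def handle_output(*outputs):
    # Called once per completed inference with the model's output tensors.
    print([o.shape for o in outputs])

accl.connect_input(send_frame)
accl.connect_output(handle_output)
accl.wait()  # block until the input callback signals completion
```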

The MemryX MX3 chip is an AI accelerator rated at 6 TOPS, and up to 16 chips can be interconnected to deliver up to 96 TOPS in total. The MX3 supports 4/8/16-bit weights as well as BFloat16, and the current-generation chip can hold up to 10.5 million 8-bit parameters per chip. Each MX3 consumes 2 Watts or less and connects to the host via PCIe Gen 3 or USB 3.
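
To put those per-chip figures in context, here is a small Python sketch, using only the numbers quoted above, showing how compute, on-chip parameter capacity, and worst-case power scale with the number of interconnected MX3 chips; the 4-chip case corresponds to the M.2 module reviewed here.

```python
# Per-chip MX3 figures quoted above.
TOPS_PER_CHIP = 6            # INT8 compute, tera-ops per second
PARAMS_PER_CHIP = 10.5e6     # 8-bit parameters held on-chip
WATTS_PER_CHIP = 2           # upper bound on power draw

def mx3_module(chips: int) -> dict:
    """Aggregate headline figures for `chips` interconnected MX3 chips."""
    return {
        "chips": chips,
        "tops": chips * TOPS_PER_CHIP,
        "params_8bit": chips * PARAMS_PER_CHIP,
        "max_watts": chips * WATTS_PER_CHIP,
    }

# The MX3-2280-M-4 M.2 module reviewed here carries four chips ...
print(mx3_module(4))   # 24 TOPS, ~42 M 8-bit parameters, <= 8 W
# ... while the architecture tops out at 16 interconnected chips.
print(mx3_module(16))  # 96 TOPS, ~168 M 8-bit parameters, <= 32 W
```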

At $149 USD, the MemryX MX3-2280-M-4 is an interesting M.2 module for edge AI use with smaller models. It can be useful for desktops that currently lack any NPU, for AI software development and exercising non-CPU code paths, and for low-power edge computing scenarios. Given that MX3 chips can be interconnected up to 16 at a time, it will also be very interesting to see what follow-on products come out of MemryX in the future.


Website: memryx.com
