AMAX Releases [SMART]Rack P47 Deep Learning & Rendering Solution at SC17

Last week at the Supercomputing Conference 2017 (SC17), AMAX released the [SMART]Rack P47, a PetaFLOP-In-A-Rack solution featuring AMD CPUs and GPUs, designed for deep learning and rendering applications.

The [SMART]Rack P47 combines the power of 20x AMD EPYC™ Processors and 80x AMD Radeon Instinct™ GPUs, based on AMD’s revolutionary “Vega” architecture, to deliver one PetaFLOP of single-precision (FP32) compute power at a stunning 30 GigaFLOPS/Watt. It is designed to address today’s most complex computational problems in Artificial Intelligence, Deep Learning, VDI, advanced rendering, compute and research at an unmatched performance-per-dollar and performance-per-watt level.


The [SMART]Rack P47 PetaFLOP-Performance-in-a-Rack solution made its first appearance at SIGGRAPH 2017 in July, when AMD CEO Lisa Su proudly unveiled the rack and described it as “the most beautiful server rack that has ever been built.”

The supercomputing-class performance of the fully-integrated rack is achieved by 20x ServMax™ P47 servers, designed with high scalability to support a variety of workloads such as virtual desktop infrastructure, deep learning, machine learning, advanced rendering and compute. Each ServMax™ P47 server supports 1x AMD EPYC™ CPU and 4x AMD Radeon Instinct™ MI25 PCIe GPUs, delivering 16,384 Stream Processors, 49.20 TFLOPS single-precision, and 98.40 TFLOPS half-precision performance. Integrated with AMD’s ROCm open software platform and MIOpen libraries, the ServMax™ P47 servers deliver superior performance with unmatched manageability for deep learning training deployments in datacenters.
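The quoted figures are easy to sanity-check; the following back-of-the-envelope sketch uses only the numbers stated in this article (server count, per-server FP32 throughput, and the 30 GigaFLOPS/Watt efficiency figure):

```python
# Sanity-check of the rack-level figures quoted above, using only numbers
# stated in the article.
servers_per_rack = 20
fp32_tflops_per_server = 49.20        # 4x Radeon Instinct MI25 per server

rack_fp32_tflops = servers_per_rack * fp32_tflops_per_server
print(round(rack_fp32_tflops))        # 984 TFLOPS, i.e. roughly one PetaFLOP

# At the quoted 30 GigaFLOPS/Watt, the implied rack power budget:
# TFLOPS -> GFLOPS, divide by GFLOPS/W to get Watts, then convert to kW.
rack_power_kw = (rack_fp32_tflops * 1000) / 30 / 1000
print(round(rack_power_kw, 1))        # ~32.8 kW for the whole rack
```

The 20 x 49.20 TFLOPS product lands at 984 TFLOPS, which is where the "one PetaFLOP" headline figure comes from.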


The [SMART]Rack P47 solution also integrates advanced memory and storage technologies; unified 100Gb InfiniBand/Ethernet HPC fabric for increased in-rack bandwidth and productivity; integrated rack-level out-of-band management layer for remote server health monitoring and rack orchestration; and a high-density rack cooling system add-on for optimal efficiency in heat exchange.

“As a high-performance technology provider enabling enterprises to close the gap between scale up performance, compute density and cost, AMAX sees the [SMART]Rack P47 as a game changer,” said Julia Shih, VP of Business Development, AMAX. “Starting from a single ServMax™ P47 server, we can scale upwards to supercomputing-class performance by leveraging AMD EPYC CPUs, AMD Radeon Instinct, and AMD ROCm development tools to support Deep Learning, rendering, and a host of other applications.”

AMAX is now taking pre-orders for both the ServMax™ P47 server and the fully-integrated [SMART]Rack P47, with delivery estimated in Q1 of 2018.

Posted in AMAX News, Cloud Computing, Data Center, Deep Learning, Enterprise Computing, GPU Computing, HPC Computing

Fortifying AMAX’s Commitment to Industry-Low Failure Rates: How Silicon Valley’s Leading OEM Appliance Manufacturer Achieves Quality

One of the distinguishing factors that keeps customers returning to AMAX as an OEM Appliance Manufacturing partner is how well we put our products to the test. Seriously, we mean business! Guaranteeing that products never leave our facility without being put through a battery of tests is our way of protecting your brand and bottom line. Quality translates into a rock-solid, reliable product, ensuring fewer warranty claims, less downtime and disruption to your end users, and an overall better customer experience of your brand. For nearly 40 years, AMAX has built a reputation of upholding the highest standards, giving ISVs, corporations, and data centers peace of mind knowing every system AMAX ships will be of the highest quality.


How can we speak so confidently?  Years and years of perfecting our manufacturing and testing process.

Executing Quality

  1. AMAX Testing Automation (ATA) – ATA is AMAX’s proprietary test suite designed to flag significantly more component errors than standard tests. Does this mean our parts are faultier? No. It means our standards are far higher than industry standards, and anything less than perfect is considered a failure and will not ship. ATA can be customized into a highly specific test program, using applicable diagnostics to evaluate performance against precise functional requirements. It captures all test and device logs for simplified data collection. Most uniquely, ATA can test large volumes simultaneously, allowing us to meet aggressive manufacturing schedules while still maintaining quality.
  2. AMAX ISO 9001 5-Stage Quality Gate – our thorough 5-Stage quality process begins with testing individual components prior to manufacturing, followed by assembly based on centralized manufacturing process instruction to ensure accuracy and consistency of builds, then our proprietary ATA high-temperature burn-in stress test, Functionality Validation Testing (FVT), and a final QA inspection.
  3. Optional Add-On Tests – because our test programs are entirely customizable based on our customers’ needs, we have additional tests that can be administered during production: Highly Accelerated Life Test (HALT), Design Verification Test (DVT), and Ongoing Reliability Test (ORT).

But it doesn’t stop there. Recently, AMAX has invested in several Environmental Stress Screening (ESS) chambers to stress systems through a range of temperature settings. Industry reliability numbers have shown that simply doing a room-temperature test, or only a slightly elevated heated test, will not catch all faulty or potentially weak components. Therefore, a combination of both hot and cold stress tests in a controlled environment is the only way to ensure that all integrated components are thoroughly tested and likely to perform as needed in the field. With a temperature range from -30°C to over 100°C and a 10°C/minute ramp rate, the new ESS chambers allow us to better simulate a variety of testing situations. A few more specifics about the ESS chambers:

  • Testing at cold temperatures will find any marginal connectivity issues in the system
  • Warmer temperatures will identify any network failures for borderline high speed signal devices
  • The ESS chambers are averaging about an 85% yield, with most devices passing; however, they catch an additional 15% of failures that were not captured by earlier testing
  • Each ESS chamber holds up to 40x 2U units, or up to 70x 1U units, depending on the rack design

ESS testing is an add-on service for customers who must have the most reliable and optimally-performing systems in the field. Once ESS testing is complete, systems progress through final stages of the 5-Stage Quality Gate into Final Test, where proper customer images and settings are verified, and logs are deleted before packing.

As many of today’s leading technology companies rely on AMAX to build the server and rackscale appliances that bear their company name, any quality or hardware performance issue can throw a wrench into customer adoption or negatively affect their brand. This is why AMAX strives to deliver platforms that are reliable beyond industry standards, and offers a slew of value-add services (Custom Branding, New Product Introduction (NPI) Program, Global Logistics, etc.) to help its partners succeed and grow quickly.



Interested in partnering with us for your OEM appliance needs?  Learn more about our OEM Appliance Program here. We can’t wait to work with you!

Posted in AMAX News, AMAX Services, Engineering, ISV Appliances, Server Appliance Manufacturing, Server OEM, Total Computing Solutions

Introducing Our Newest AMAX Family Member, Heart-Melting Specialist

On Monday, June 26th, a member of the AMAX family arrived to work and heard a weak, high-pitched cry coming from beneath a car. Upon investigation, he found a straggly gray ball of fur, which turned out to be a malnourished and terrified 4-week old kitten. A severe eye infection had left him blinded with both eyes sealed shut.


They say it takes a village to raise a child, and it was no different with this little guy. The AMAX family immediately jumped into action, warming him with a towel, cleaning him up, and feeding him kitten formula from a tiny baby bottle.


After a visit to the vet deemed him healthy aside from the eye infection, the AMAX team spent the following weeks nursing him back to health.

We are proud to introduce the newest member of the AMAX family: Matrix GPU On-Premise Cloud (Powered by Bitfusion Flex), or Neo for short.


We hope we can keep him as the AMAX office cat and mascot, as his presence has melted the hearts of all who have nursed him and held him sleeping in their laps, and brought a new level of camaraderie, compassion, and joy to the office.


If you would like to donate to the care of little Neo, we hope you will consider procuring one of our MATRIX Deep Learning Solutions. Not only is MATRIX the best solution on the market for AI development and deployment, but proceeds go to keeping a roof over our little guy.

Posted in AMAX News, Deep Learning, GPU Computing, Product Development

Fresh off GTC 2017: The Revolutionary MATRIX GPU Cloud Solution

Ten years have gone by since GPU-accelerated computing was first introduced. This year at GPU Technology Conference 2017, advances in GPU computing and methods have culminated in the most ambitious and far-reaching technical endeavor yet—Artificial Intelligence. As researchers, global enterprises, and startups alike converged at GTC, the hottest topic was clearly AI and Machine Learning, with NVIDIA doubling down on its position as an AI company, “Powering the AI Revolution.”

In his keynote, NVIDIA founder and CEO Jensen Huang discussed how the AI Boom is fueling a post-Moore’s Law (or Moore’s Law Squared) demand for GPU compute power. In response, NVIDIA has invested over $3 billion to develop the new Tesla V100 (Volta) accelerator. Built with 21 billion transistors, the Volta V100 delivers deep learning performance equal to 100 CPUs and will support new releases of deep learning frameworks such as Caffe 2, TensorFlow, Microsoft Cognitive Toolkit, and MXNet. The DGX-1 also gets an upgrade with V100 GPUs, selling at $149,000 (want one? Inquire here).


Huang also introduced the DGX Station, a workstation featuring 4x V100 GPUs for 480 TeraFLOPS of Tensor computing power, with a selling price of $69,000.

Other announcements included a collaboration with Toyota on autonomous driving and Project Holodeck for working in a shared VR environment; but more than anything, the show signaled that NVIDIA has every intention of powering the AI boom, particularly with regard to accelerating Machine Learning.


Along those same lines, AMAX showcased its dedication to providing advanced tools to fast-track Deep Learning development while reducing barrier to entry. With the launch of “The MATRIX” product line, AMAX combined its award-winning Deep Learning platforms with end-to-end Deep Learning tools as well as GPU virtualization technology. The MATRIX increases GPU utilization, fast-tracks AI development and training, facilitates task management, and minimizes infrastructure costs, all to an unprecedented degree.


While the MATRIX is deployed as a turnkey appliance in workstation, server and rackscale clusters, it’s especially beneficial to AI startups and incubators who need a deep learning platform to scale with them. The ultra-quiet MATRIX workstations feature a mini 2-GPU form factor and a 4-GPU form factor, and through the MATRIX software, GPU resources can be aggregated and presented to users as an on-premise GPU cloud for dynamic sharing. What this means is that AI companies can build virtual GPU clusters on demand, using hardware that sits quietly under a desk.

Our Presenter Series featured topics around GPU Virtualization and Cloud Computing for Machine Learning, including how the MATRIX enables AI startups to accelerate Time-to-Market, how to upgrade non-GPU infrastructures to include GPU resources, how to break through current performance limitations for GPU computing, and many more.


We were also honored to be interviewed by insideHPC to talk about the use cases of the MATRIX.


insideHPC: What are you showcasing at the booth today?

Dr. Rene Meyer (VP of Technology, AMAX): What we are showcasing here is a very interesting solution—a hardware/software solution. We not only present the hardware, but we put a software layer on top, which allows you to virtualize GPUs in those machines.

insideHPC: Can you tell me about some use cases and what problems it solves?

Dr. Rene Meyer: One of the use cases is for enterprise customers who purchased a few racks of hardware and have since learned that the software they use supports GPUs and benefits from acceleration. What they usually do is rip the old hardware out and replace it with new hardware featuring GPUs, which can be expensive. Rather than do that, they can add a few blocks of our high-density MATRIX servers, then use the MATRIX software to virtualize the GPUs and reattach them to the existing cluster. So with the MATRIX, you can turn your existing non-GPU cluster into a GPU cluster, with minimal additional hardware and without performing a complete refresh.

insideHPC: Ok Rene. This MATRIX box has been described as groundbreaking. Can you tell me more about it?

Dr. Rene Meyer: The MATRIX offering is an end-to-end solution. It’s not just a very powerful deep learning box, but it also has an integrated software layer for a plug-and-play solution. The software layer allows you to spin up instances: containers pre-configured with Deep Learning frameworks like TensorFlow, Caffe, Torch, and so on. So you don’t have to worry about having IT configure, set up, or load things to make sure you have the latest version and things are working. You can literally, at the click of a button, spin up instances and be ready to go.

insideHPC: So for developers this would be pretty powerful, to get stuff out of the way and focus on work. Is that the idea?

Dr. Rene Meyer: That’s exactly the idea. You can start off development with one of these boxes. Once you see that you need to upgrade or scale out for more power, there are various ways we can do this. One way is to buy multiple MATRIX boxes. Through virtualization, the MATRIX software lets you attach GPUs from one box to another box or combine compute resources dynamically. Therefore, you can build more powerful servers or workstations for your workloads on demand. As you continue to grow, you can purchase more servers or workstations, which can be seamlessly integrated into your growing virtual GPU pool. What’s good for startups is that you can grow your computation power significantly without the need to build a data center or rent from a colo, reducing the time and cost of a traditional infrastructure.

For more information about AMAX’s MATRIX solution or Deep Learning platforms, please contact us!

Posted in AMAX News, Deep Learning, Enterprise Computing, GPU Computing, HPC Computing, Tradeshow/Events, Virtualization

GANs: When Machines vs Machines Aspire to Greatness

In our last blog post, we touched a little upon the concept of GANs, short for Generative Adversarial Networks. GANs are a relatively new branch of unsupervised machine learning, first introduced by Ian Goodfellow in 2014, and have spurred major interest among scientists and researchers with their wide applications and remarkably good results.

To make it sound even more interesting, the concept of GANs was recently described by Yann LeCun, Director of AI Research at Facebook, as the most important development in deep learning, and “the most interesting idea in the last 10 years in ML (Machine Learning).”

A GAN takes two independent networks – one generative and one discriminative – that work separately and act as adversaries. Quite literally, the generative network generates novel synthesized instances, while the discriminative network discriminates between synthesized instances and real ones.


One way to interpret this is through an art investigator and an art forger. The generator, in this case the forger, wants to create, say, a fake Van Gogh painting. He starts by learning what Van Gogh paintings look like, then imitates them with the goal of fooling other people. The discriminator, in this case the investigator, also starts by learning the characteristics of Van Gogh in order to recognize what’s fake. Whenever one side loses, whether the forger gets caught or the investigator gets fooled, he works harder to improve. To win the battle, both the forger and the investigator train and escalate until both become experts.
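The forger-vs-investigator loop can be sketched end to end in a few lines of numpy. The following toy GAN is purely our own illustration (not from the article): the "real" data is a 1-D Gaussian, the generator is a simple affine map of noise, and the discriminator is logistic regression, all simplifying assumptions chosen so the adversarial dynamic fits in one readable loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the forger must imitate: samples from N(4, 1).
def real_batch(n):
    return rng.normal(4.0, 1.0, size=n)

# Forger (generator): an affine map of noise, G(z) = a*z + b.
g = {"a": 1.0, "b": 0.0}
# Investigator (discriminator): logistic regression, D(x) = sigmoid(w*x + c).
d = {"w": 0.0, "c": 0.0}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, n = 0.01, 64
for step in range(3000):
    # Investigator update: push D(real) toward 1 and D(fake) toward 0.
    x_real, z = real_batch(n), rng.normal(size=n)
    x_fake = g["a"] * z + g["b"]
    p_real = sigmoid(d["w"] * x_real + d["c"])
    p_fake = sigmoid(d["w"] * x_fake + d["c"])
    d["w"] += lr * np.mean((1 - p_real) * x_real - p_fake * x_fake)
    d["c"] += lr * np.mean((1 - p_real) - p_fake)

    # Forger update: push D(fake) toward 1 (the "non-saturating" generator loss).
    z = rng.normal(size=n)
    x_fake = g["a"] * z + g["b"]
    p_fake = sigmoid(d["w"] * x_fake + d["c"])
    grad_x = (1 - p_fake) * d["w"]   # d/dx of log D(x)
    g["a"] += lr * np.mean(grad_x * z)
    g["b"] += lr * np.mean(grad_x)

# After training, the forger's offset g["b"] should have drifted toward the
# real data's mean of 4 -- neither side can "win" once the fakes look real.
print(g["b"])
```

Note how neither network ever sees the other's parameters: each only reacts to the other's outputs, which is exactly the escalation the analogy describes.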

Now, imagine expanding on this concept: if machines can soon create masterpieces in art and design, we may be seeing artists on the “Endangered Jobs” list very soon.

Application of GANs

GANs can be applied to multiple scenarios, including image classification, speech recognition, video production, robot behavior generation, etc. One of the most common applications is image generation – more specifically, the generation of “natural” images.

Many of you may have played the “What will you look like when you are old?” game online, or something of that sort, usually just for a laugh. With GANs technology available, scientists have improved the simulation to become much more reliable, and something that one day could be used to help missing person investigations.


In the application, the generative network was trained on 5,000 faces labeled with ages. The machine learns the characteristic signatures of aging and then applies them to faces to make them look older. The second step of the application takes the discriminative network and has it compare the original “before” images with the synthesized “after” images to determine whether they show the same person.

When pitted against other face-aging techniques, the team using GANs achieved 60% more successful matches of “before” and “after” images identifying the same person.

In addition to face recognition, GANs have proven useful in astronomy research, thanks to a group of Swiss scientists. Until now, the human ability to observe outer space has been limited by the capabilities of telescopes; however advanced modern telescopes become, scientists are never satisfied with the amount of detail they can show.

In the study, scientists took a space image and deliberately degraded its resolution. Using the degraded image and the original, they trained the GAN to recover the degraded image as faithfully and genuinely as possible. Then, using the trained GAN, scientists obtained a much sharper version of the original image, finding it better able to recover features than anything used to date.


Extensions of GANs

Ever since the concept of GANs was introduced, researchers have focused on how to improve the stability of GAN training. More suitable architectures have been developed to put constraints on the training and to tackle specific image generation tasks.


A CGAN is an extension of the basic GAN with a conditional setting. It works by taking into account external information, such as a label, text, or another image, to determine a specific representation of the generated images. The scary cat drawing we mentioned in the previous blog and the space image recovery technique are both results of a CGAN. Other experimental applications include:

Text to image:


Image to image:
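The conditioning mechanism itself is simple to show in code. In the sketch below (our own illustration; the dimensions and the digit-label example are assumptions), the external information is a class label, one-hot encoded and concatenated onto the generator's noise vector; the discriminator would receive the same condition alongside each sample it judges.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(label, num_classes):
    """Encode a class label as a one-hot vector."""
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

# In a CGAN, the condition is fed to both networks alongside their usual
# inputs: the generator sees [noise | condition] and the discriminator sees
# [sample | condition], so generation can be steered by the label.
noise_dim, num_classes = 8, 10
z = rng.normal(size=noise_dim)
condition = one_hot(3, num_classes)      # e.g. "generate a 3"

generator_input = np.concatenate([z, condition])
print(generator_input.shape)             # noise plus one-hot label: (18,)
```

Swapping the one-hot label for a text embedding or an encoded image gives the text-to-image and image-to-image variants listed above.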



A LAPGAN is a Laplacian Pyramid of GANs, used to generate high-quality samples of natural images. Training a LAPGAN starts by breaking the original training task into multiple manageable stages; at each stage, a generative model is trained using a GAN. In other words, a LAPGAN increases the models’ learning capability by allowing them to be trained sequentially. According to the research paper, LAPGAN-generated images were mistaken for real images around 40% of the time, compared to 10% for a basic GAN.
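The coarse-to-fine staging rests on the classic Laplacian pyramid: each level stores only the detail lost when the image is downsampled, and one GAN per level learns to generate that detail. A minimal numpy sketch of the pyramid itself (our own illustration; it substitutes 2x2 block averaging for the Gaussian filtering used in the paper):

```python
import numpy as np

def build_laplacian_pyramid(img, levels):
    """Decompose an image into per-level detail maps plus a coarse residual."""
    pyramid = []
    current = img.astype(float)
    for _ in range(levels):
        # Downsample by 2x2 block averaging (stand-in for blur + subsample).
        h, w = current.shape
        small = current.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        # Upsample by nearest-neighbour repetition; record the lost detail.
        up = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)
        pyramid.append(current - up)
        current = small
    pyramid.append(current)  # coarsest level
    return pyramid

def reconstruct(pyramid):
    """Invert the pyramid: upsample the coarse image, add back each detail level."""
    current = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        current = np.repeat(np.repeat(current, 2, axis=0), 2, axis=1) + detail
    return current

img = np.arange(64.0).reshape(8, 8)
pyr = build_laplacian_pyramid(img, levels=2)
assert np.allclose(reconstruct(pyr), img)  # lossless by construction
```

In a LAPGAN, each `pyramid` detail level becomes the target of its own conditional GAN, so no single generator has to model the full-resolution image at once.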



A DCGAN is short for Deep Convolutional GAN, a more stable family of architectures proposed in a paper published in 2016. Its generator works roughly as a reverse of a Convolutional Neural Network (CNN), bridging the gap between CNNs for supervised learning and unsupervised learning. In the paper, researchers predicted promising extensions of the DCGAN framework into domains such as video frame prediction and speech synthesis.



InfoGAN is an information-theoretic extension of the Generative Adversarial Network. It has been shown to learn interpretable representations by maximizing the mutual information between a small subset of the latent variables and the observation. Real-life examples include concepts such as the brightness, rotation, and width of an object, and even hairstyles and expressions on human faces.

Challenges of GANs

GANs have attracted major attention within the academic field since their advent three years ago. Near the end of last year, Apple published its very first AI paper, announcing its efforts in algorithm training using GANs.

In addition to the aforementioned extensions, more variations of GANs are being studied to further refine the model, as well as to tackle its shortcomings, including the difficulty and instability of the training process, as discussed in detail by Ian Goodfellow in his answer on Quora.


As researchers continue developing advancements to GAN models and scaling up the training, we can expect to see fairly accurate and realistic machine-generated samples of videos, images, text, interactions, etc. in the very near future. Which raises the question: if we see machines pitted against each other in a manner that gives them human-like abilities to mimic and validate, does this mean that at some point they will not only reflect the world to us, but also have a hand in creating it, too?

If you’re ready to build your GANs and need the most powerful machine learning engines in the world, please visit


Posted in AMAX Services, Deep Learning