Saturday, December 01, 2018

AI/ML/cloud: Amazon Elastic Inference + AWS Inferentia

"AI co-processor"

Amazon Elastic Inference - Amazon Web Services

"Amazon Elastic Inference allows you to attach low-cost GPU-powered acceleration to Amazon EC2 and Amazon SageMaker instances to reduce the cost of running deep learning inference by up to 75%. Amazon Elastic Inference supports TensorFlow, Apache MXNet, and ONNX models, with more frameworks coming soon."
