Amazon Elastic Inference - Amazon Web Services
"Amazon Elastic Inference allows you to attach low-cost GPU-powered acceleration to Amazon EC2 and Amazon SageMaker instances to reduce the cost of running deep learning inference by up to 75%. Amazon Elastic Inference supports TensorFlow, Apache MXNet, and ONNX models, with more frameworks coming soon."
Amazon Elastic Inference supports TensorFlow, Apache MXNet, and ONNX models, with additional frameworks coming soon.
"Amazon Elastic Inference allows you to attach low-cost GPU-powered acceleration to Amazon EC2 and Amazon SageMaker instances to reduce the cost of running deep learning inference by up to 75%. Amazon Elastic Inference supports TensorFlow, Apache MXNet, and ONNX models, with more frameworks coming soon."
AWS re:Invent 2018 Keynote - Andy Jassy - YouTube
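To make the "attach acceleration to an instance" idea concrete, here is a minimal sketch (not from the linked pages) assuming the SageMaker Python SDK. The S3 path, IAM role ARN, and the instance/accelerator sizes are placeholders; `accelerator_type` is the parameter that requests an Elastic Inference accelerator alongside a plain CPU instance.

```python
from sagemaker.tensorflow import TensorFlowModel

# Placeholder S3 location and IAM role -- substitute your own.
model = TensorFlowModel(
    model_data="s3://my-bucket/model.tar.gz",
    role="arn:aws:iam::123456789012:role/MySageMakerRole",
    framework_version="1.12",
)

# Deploy on an inexpensive CPU instance and attach an Elastic
# Inference accelerator instead of renting a full GPU instance.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m4.xlarge",
    accelerator_type="ml.eia1.medium",
)
```

The accelerator supplies just enough GPU capacity for inference while the host instance is sized for CPU and memory, which is where the claimed cost savings come from.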
AWS Inferentia - Amazon Web Services (AWS)
High performance machine learning inference chip, custom designed by AWS
This is similar to Google's Tensor Processing Unit (TPU).
"A tensor processing unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google specifically for neural network machine learning"
Cloud TPUs - ML accelerators for TensorFlow | Cloud TPU | Google Cloud
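For comparison, here is a minimal sketch of how a TensorFlow program targets a Cloud TPU. This assumes a TPU-enabled runtime (e.g. a Cloud TPU VM or Colab) and the TensorFlow 2.x distribution API; the toy model is purely illustrative.

```python
import tensorflow as tf

# Resolve and connect to the Cloud TPU attached to this runtime.
# tpu="" relies on the environment to supply the TPU's address;
# otherwise pass the TPU name or gRPC address explicitly.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# TPUStrategy replicates the model across the TPU's cores.
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
```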