Tip #3: Start With Low-Hanging Fruit
In the “productionized” forms of ML/AI mentioned above, specifically on AWS, there are two main classes of solution available. They differ in the amount, and the type, of expertise needed to use them.
The first class, which I refer to as low-hanging fruit, allows individuals without ML expertise, and without large amounts of training data, to nevertheless inject intelligence into existing legacy applications. These solutions take the form of AI services (RESTful APIs) that contain pre-trained ML models for common needs. AWS manages the AI services, including scaling and re-training, while your application developers focus on connecting the legacy applications to the services for classifications and predictions.
Several AWS AI services support Natural Language Understanding (NLU) on textual and document-oriented data, including Amazon Comprehend, Amazon Textract, Amazon Transcribe, and Amazon Translate. Other services include the text-to-speech service Amazon Polly; the voice/text chatbot service Amazon Lex; time-series predictions with Amazon Forecast; Amazon Personalize, which turns real-time user activity (clicks, page views, signups, purchases) into product recommendations; and the image and video analysis services Amazon Rekognition Image and Amazon Rekognition Video, among others.
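To illustrate the pattern of calling a pre-trained AI service from application code, here is a minimal sketch that invokes Amazon Comprehend's DetectSentiment API via boto3. It assumes AWS credentials and a region are already configured; the `top_sentiment` helper is a hypothetical convenience of my own for pulling the predicted label out of the response, not part of the Comprehend API.

```python
def detect_sentiment(text: str, language_code: str = "en") -> dict:
    """Call the Amazon Comprehend DetectSentiment API and return the
    raw response (assumes configured AWS credentials and region)."""
    import boto3  # imported here so the pure helper below needs no SDK
    client = boto3.client("comprehend")
    return client.detect_sentiment(Text=text, LanguageCode=language_code)

def top_sentiment(response: dict) -> tuple:
    """Hypothetical helper: extract the predicted sentiment label and its
    confidence score from a DetectSentiment response."""
    label = response["Sentiment"]  # e.g. "POSITIVE", "NEGATIVE", "MIXED"
    # SentimentScore keys are capitalized: "Positive", "Negative", ...
    score = response["SentimentScore"][label.capitalize()]
    return label, score
```

No ML model is trained or hosted by the caller; the legacy application simply posts text and consumes the prediction in the JSON response.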
If you do have abundant training data and wish to train your own model, there is yet another piece of low-hanging fruit. Amazon SageMaker is an alternative to taking on the work of deploying and managing ML servers yourself (e.g. using the Deep Learning AMIs on EC2). SageMaker is a fully managed service for ML that provides a host of out-of-the-box benefits: automated model training, automated hosting of a trained model for production inferencing, a long list of built-in algorithms, decoupling of training compute from inferencing compute, right-sized compute for the job, and auto-scaling of inferencing endpoints, among many other features. All of this comes without the need to configure, deploy, operate, and scale a fleet of EC2 servers.
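The train-then-host flow described above can be sketched with the SageMaker Python SDK, here using the built-in XGBoost algorithm. This is a sketch under stated assumptions, not a definitive implementation: the IAM role ARN, S3 URIs, instance types, and hyperparameters are placeholders you would replace with your own values.

```python
# Placeholder hyperparameters for the built-in XGBoost algorithm;
# your own values and objective will differ.
HYPERPARAMETERS = {
    "objective": "binary:logistic",
    "num_round": "100",
}

def train_and_deploy(role_arn: str, train_s3_uri: str, output_s3_uri: str):
    """Run a managed SageMaker training job, then host the resulting model
    on a separate, auto-scalable inference endpoint."""
    import sagemaker
    from sagemaker.estimator import Estimator

    session = sagemaker.Session()
    # Look up the container image for the built-in XGBoost algorithm.
    image_uri = sagemaker.image_uris.retrieve(
        framework="xgboost",
        region=session.boto_region_name,
        version="1.7-1",
    )
    estimator = Estimator(
        image_uri=image_uri,
        role=role_arn,                   # placeholder IAM role ARN
        instance_count=1,
        instance_type="ml.m5.xlarge",    # training compute...
        output_path=output_s3_uri,
        hyperparameters=HYPERPARAMETERS,
        sagemaker_session=session,
    )
    estimator.fit({"train": train_s3_uri})  # managed training job
    # ...decoupled from inferencing compute, sized independently.
    return estimator.deploy(initial_instance_count=1,
                            instance_type="ml.m5.large")
```

Note how training and inferencing compute are chosen independently: the `fit` call provisions (and tears down) training instances, while `deploy` stands up a separate endpoint that can auto-scale with request load.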