Although artificial intelligence has been around since the 1950s, it is not too late to get started, whether you are a developer or an enterprise manager. In fact, given the major developments of the past few years, there has never been a better time to get started in Artificial Intelligence (AI). AI is redefining how humans interact with machines and enabling richer experiences for end users and enterprises alike.
This entry is a follow-up to the talk I delivered at Google Cloud Day in Malta on 26 July 2018.
Below, I will first set the context and then outline a selection of the latest developments that should motivate you to get started in artificial intelligence. At the end, there is a list of simple steps you might want to try out.
Let’s start with a quick recap of the main terminology in AI.
- Artificial Intelligence (AI): The science of making machines/computers intelligent
- Machine Learning (ML): An approach to achieving artificial intelligence in which the machine learns from a set of labelled data by finding patterns in that data.
- Deep Learning (DL): A machine learning approach that uses a deep structure, such as a deep neural network with a number of hidden layers, capable of finding patterns within patterns in the data.
Machine Learning as a paradigm shift in programming
Just like every paradigm shift explored in my previous entry, ML brings about a spectacular and motivating approach to the way we face problems.
“Machine learning is a core, transformative way by which we’re rethinking how we’re doing everything” — Sundar Pichai, CEO of Google
So, what does this mean in practice? Traditionally, programs were designed in a straightforward way: first analyse your domain, then create a set of rules by which the domain data will be processed, and finally obtain an output.
Machine learning reverses this approach. In an ML program, you first feed in a labelled dataset for training: a sample set of inputs and a corresponding set of outputs, where every input is linked to an expected output. During training, the algorithm finds patterns within these relationships and produces a set of rules, known as a model. When a new input is fed in, it is processed against the trained model and an output is predicted with a particular level of confidence. This level of confidence is directly related to the training given in the first instance: broadly, the larger and more representative the training set, the higher the confidence in the final results. (PS: this specific approach is known as supervised learning.)
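The supervised-learning loop described above can be sketched with a toy classifier. This is a minimal nearest-neighbour model in plain Python, standing in for a real ML library; the dataset and labels are made up purely for illustration:

```python
from collections import Counter

def train(samples):
    # "Training" a nearest-neighbour model is simply storing the
    # labelled examples; the "rules" live in the data itself.
    return list(samples)

def predict(model, x, k=3):
    # Predict a label for x and report a confidence score:
    # the fraction of the k closest training points that agree.
    neighbours = sorted(model, key=lambda s: abs(s[0] - x))[:k]
    votes = Counter(label for _, label in neighbours)
    label, count = votes.most_common(1)[0]
    return label, count / k

# Labelled dataset: inputs paired with expected outputs.
data = [(1, "small"), (2, "small"), (3, "small"),
        (10, "large"), (11, "large"), (12, "large")]

model = train(data)
print(predict(model, 2.5))   # near the "small" cluster
print(predict(model, 11.5))  # near the "large" cluster
```

A real model generalises far better than this, but the shape is the same: labelled examples in, a model out, and predictions accompanied by a confidence score.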
Another interesting approach to machine learning is reinforcement learning. This approach has the machine assess its previous actions and modify future actions based on the experience gathered up to that point. Consider the Breakout demo given by DeepMind to demonstrate their deep Q-learning algorithm. The computer is given the pixel values of every game frame as input. The algorithm checks these against the score and learns which series of frames led to an increase in points and which did not. After a few minutes of training, the results are still essentially random and there is no confidence in performance. After 2 hours of training, the algorithm is able to play the game flawlessly. The interesting part comes after 4 hours of training, when the algorithm finds a better strategy to win the game, as demonstrated in the animated gif below. It discovers that it scores more points, faster, by breaking through one side of the wall and bouncing the ball along the top.
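The act/observe/update loop behind this can be sketched with tabular Q-learning on a trivial made-up environment — a far simpler cousin of DeepMind's deep Q-network, shown here only to illustrate how reward feedback shapes behaviour:

```python
import random

# A 5-cell corridor: start in cell 0, reward only for reaching cell 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left or right

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

random.seed(0)
for _ in range(500):  # episodes
    s = 0
    while s != GOAL:
        # Explore occasionally, otherwise exploit the best-known action.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        nxt, r = step(s, a)
        # Q-learning update: nudge the estimate toward
        # reward + discounted best future value.
        best_next = max(Q[(nxt, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# After training, the greedy policy moves right in every cell.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)
```

DeepMind's version replaces the small lookup table with a deep neural network reading raw pixels, but the underlying update rule is the same idea.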
The DeepMind team’s official publication is titled “Human-level control through deep reinforcement learning”.
There is also this GitHub Project if you wish to try it out.
When AI is combined with the Internet of Things, the applications can be extremely interesting (and probably scary for some). During the 2018 Google I/O in Mountain View, I saw an awesome demo that combines Android Things and TensorFlow in a cool application of reinforcement learning: Rock/Paper/Scissors.
If you’re not freaking out and wish to learn more about how to get started, I invite you to the next section of this entry.
Google as your partner in AI
By now (I hope) you are probably excited and want to start building your own AI. My personal choice of inspiration and resources is Google. No, not just as a search engine that answers the dumbest of my questions, but the company behind it.
One key challenge in modern ML implementations is their significant hardware requirements. The machines needed to run these programs are often neither easily available nor affordable, since they require at least one high-performance GPU together with the hardware needed to support it. To address this challenge, which keeps many from venturing into AI, Google provides two cloud-powered options:
- Google Colab: This is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. Just apply, get access, and make use of a GPU in the cloud to run your Python scripts.
- Cloud TPU: This is custom hardware designed by Google specifically to perform tensor operations (e.g. matrix multiplication) more efficiently than a CPU or GPU.
If you’re new to the idea of a TPU, I strongly recommend watching the IO18 talk by TensorFlow product manager Zak Stone titled “Effective machine learning using Cloud TPUs”.
Just in case you are wondering about the TPU’s actual performance and would rather not take Google’s word, or mine, for it, you may want to have a look at this carefully executed benchmark experiment by RiseML.
During the IO17 conference, Google announced that it is stepping up its efforts in AI. I vividly remember the comparison with companies’ recent prioritisation of a mobile-first strategy. During his keynote, Sundar explained how the priority is now AI-first. When you think about it, this means a lot: AI can change, revolutionise, or even save any enterprise today.
Google’s contribution to AI did not start there. In late 2015, Google released TensorFlow as an open-source project, and it redefined the way we tackle AI. Today, TensorFlow is the most popular ML project on GitHub. If you are just getting started and exploring TensorFlow at a higher level, the official TensorFlow website is a good place to begin. At the end of this entry I provide more links and resources to help you get started.
Google Cloud Platform
The use of the cloud is not limited to borrowed computing and storage power. The GCP offers a selection of AI products that can take you from zero to one in AI. It is ultimately up to you how you pace your progress.
Ready to go
The first and easiest way to use the power of AI on the Google Cloud Platform is to access the off-the-shelf APIs. These are ready-made services that you can simply call on the GCP. These APIs use Google’s ML models, built with Google’s own data. This means that you have no control over how a model works or why it returns particular results. While these services return good general results, they might not necessarily transfer to your specific domain, and it is quite probable that you will run into this limitation. The target market for these APIs is app developers.
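To give an idea of how little work these APIs require, a label-detection request to the Vision API is just an HTTP POST to the `images:annotate` endpoint with a JSON body along these lines (the image content is a base64 placeholder here):

```json
{
  "requests": [
    {
      "image": { "content": "BASE64_ENCODED_IMAGE" },
      "features": [
        { "type": "LABEL_DETECTION", "maxResults": 5 }
      ]
    }
  ]
}
```

The response lists labels with confidence scores, with no model training or hardware on your side at all.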
If the off-the-shelf solutions are not good enough for you and you have enough data to build and train your own models, then there is an option that lets you do the real thing. You can deploy an application that uses TensorFlow on the Cloud ML Engine and train your own models with your own data while making use of the specialised hardware offered on the GCP. The target market for this feature is data scientists and ML practitioners.
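To make the idea of “training your own model on your own data” concrete, here is a deliberately tiny sketch of a training loop in plain Python — gradient descent fitting a line to made-up data. A TensorFlow program running on Cloud ML Engine does the same thing at a vastly larger scale, with the gradients computed automatically:

```python
# Made-up training data following y = 2x + 1 (the "rules" we hope to recover).
data = [(x, 2 * x + 1) for x in range(10)]

w, b = 0.0, 0.0  # model parameters, initially untrained
lr = 0.01        # learning rate

for epoch in range(2000):
    # Gradients of the mean-squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # Step the parameters downhill along the error surface.
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # should approach 2.0 and 1.0
```

The value of the Cloud ML Engine route is exactly this control: you choose the model, the loss, and the data, rather than consuming a model Google trained for you.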
Somewhere in between
Finally, if you neither want to use the off-the-shelf ML solutions nor want to go to the trouble of designing and building your own TensorFlow cloud application, Google has built something to meet you halfway: AutoML. This cloud-based tool lets you upload a labelled dataset and, in the background, finds the ML architecture that best suits your needs and uses it to build a model from the data you fed in. This is exactly the case of meeting halfway between the two options mentioned above: it is more flexible than the ML APIs but, on the other hand, does not give you all the control a TensorFlow application would.
Wrapping up just to get started
The way forward can easily be broken down into these steps:
1. Get to know more about it
- Follow the official TensorFlow YouTube channel and stay tuned.
- Attend GDG AI evenings, such as the awesome one on the latest TensorFlow delivered by my colleague Mark Bugeja.
2. Try off the shelf solutions
- My favourite has to be the Google Vision API: https://cloud.google.com/vision/
3. Try AutoML if you have a dataset
4. Experiment with TensorFlow
5. Equip your team
- Find an online course to try it out: Udemy, Coursera, Udacity
- Or a formal qualification, such as the Master of Science in AI at the University of Malta.
This has been a very brief introduction to a vast area that is changing the way businesses function and the way we deliver ever more meaningful experiences to our users. All you have to do today is simple: get started.
Source link https://uxplanet.org/getting-started-in-ai-2018-3db33d54e784?source=rss—-819cc2aaeee0—4