About this sample
Words: 669 | Page: 1 | 4 min read
Published: Jan 29, 2019
Artificial Intelligence (AI) is the theory and development of computer systems that are able to perform tasks that traditionally have required human intelligence. AI is a vast field, of which machine learning is a subdomain. Machine learning can be described as a method of designing sequences of actions to solve a problem, known as algorithms, which automatically optimise through experience with limited or no human arbitration. These methods can be used to find patterns in large sets of data (big data analytics) from increasingly diverse and innovative sources. The figure below provides an overview.
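The idea of "optimising through experience" can be made concrete with a minimal sketch: a one-parameter model fitted by gradient descent on toy data. The function name and data here are illustrative, not from any particular library.

```python
# Minimal sketch of "optimising through experience": a one-parameter
# model y = w * x fitted by gradient descent on toy data.

def fit_slope(xs, ys, lr=0.01, steps=200):
    """Learn w for y = w * x by repeatedly reducing the squared error."""
    w = 0.0
    for _ in range(steps):
        # Gradient of the mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # step in the direction that reduces the error
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # true relationship: y = 2x
print(round(fit_slope(xs, ys), 2))  # converges close to 2.0
```

Each pass over the data nudges the parameter toward lower error, which is the "experience" the definition refers to.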
Since an early flush of optimism in the 1950s, smaller subsets of artificial intelligence — first machine learning, then deep learning, a subset of machine learning — have created ever larger disruptions. The easiest way to think of their relationship is to visualize them as concentric circles: AI, the idea that came first, is the largest; then machine learning, which blossomed later; and finally deep learning, which is driving today's AI explosion, fitting inside both.
Machine learning, at its most basic, is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world. So rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is "trained" using large amounts of data and algorithms that give it the ability to learn how to perform the task. Machine learning came directly from the minds of the early AI crowd, and the algorithmic approaches over the years included decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others. One of the very best application areas for machine learning for many years was computer vision, though it still required a great deal of hand-coding to get the job done.
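The contrast between hand-coded rules and a model trained on examples can be sketched with one of the simplest learning algorithms, a 1-nearest-neighbour classifier. The data and labels below are invented purely for illustration.

```python
# Instead of hand-coding rules, a model is "trained" on labelled examples.
# A 1-nearest-neighbour classifier predicts the label of the closest
# training point (toy data, purely illustrative).

def nearest_neighbour(train, query):
    """train: list of (features, label) pairs; query: a feature tuple."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda item: dist(item[0], query))
    return label

# Toy data: (width, height) of shapes labelled "sign" or "not_sign"
train = [((2.0, 2.0), "sign"), ((2.1, 1.9), "sign"),
         ((5.0, 1.0), "not_sign"), ((4.8, 0.9), "not_sign")]
print(nearest_neighbour(train, (2.05, 2.0)))  # → sign
```

No rule about what makes a "sign" is ever written down; the prediction comes entirely from the labelled examples, which is the essence of the trained approach described above.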
People would go in and write hand-coded classifiers like edge detection filters so the program could identify where an object started and stopped; shape detection to determine if it had eight sides; a classifier to recognize the letters "S-T-O-P." From all these hand-coded classifiers, they would develop algorithms to make sense of the image and "learn" to determine whether it was a stop sign. Good, but not mind-bendingly great — especially on a foggy day when the sign isn't perfectly visible, or when a tree obscures part of it. There's a reason computer vision and image detection didn't come close to rivaling humans until very recently: it was too brittle and too prone to error. Time, and the right learning algorithms, made all the difference. Another algorithmic approach from the early machine-learning crowd, artificial neural networks, came and mostly went over the decades. Neural networks are inspired by our understanding of the biology of our brains — all those interconnections between the neurons.
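A hand-coded classifier step of the kind described above can be sketched as a simple edge-detection filter applied to a tiny grayscale image. This is a pure-Python stand-in for the hand-engineered filters early vision pipelines relied on; the image values are made up.

```python
# Sketch of a hand-coded vision step: a vertical-difference edge filter.
# Large values mark places where brightness changes sharply (an "edge").

def edge_strength(image):
    """Absolute difference between vertically adjacent pixels."""
    return [[abs(image[r + 1][c] - image[r][c]) for c in range(len(image[0]))]
            for r in range(len(image) - 1)]

# Toy 4x4 grayscale image: 0 = dark background, 9 = bright sign region
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
print(edge_strength(img)[0])  # → [0, 9, 9, 0], the top edge of the bright region
```

Note that every threshold and rule here is fixed by the programmer in advance — exactly the brittleness the passage describes when fog or an occluding tree changes the pixel values.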
But unlike a brain, where any neuron can connect to any other neuron within a certain physical distance, these artificial neural networks have discrete layers, connections, and directions of data propagation. You might, for example, take an image and chop it up into a bunch of tiles that are fed into the first layer of the neural network. Individual neurons in the first layer do their work, then pass the data to a second layer. The second layer of neurons does its tasks, and so on, until the final layer produces the final output. Each neuron assigns a weighting to its input — how correct or incorrect it is relative to the task being performed. The final output is then determined by the total of those weightings. So think of our stop sign example. Attributes of a stop sign are chopped up and "examined" by the neurons — its octagonal shape, its fire-engine red color, its distinctive letters, its traffic sign size, and its motion or lack thereof.
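The layered propagation and weighted inputs described above can be sketched as a minimal feed-forward computation. The weights, biases, and input scores below are invented for illustration; in a real network they would be learned from data rather than written by hand.

```python
# Minimal sketch of layered propagation: each neuron forms a weighted sum
# of its inputs and squashes it with a sigmoid before passing it on.
import math

def layer(inputs, weights, biases):
    """One dense layer: a weighted sum per neuron, squashed into (0, 1)."""
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

x = [0.9, 0.2, 0.7]  # hypothetical scores for shape, colour, and letters
h = layer(x, [[1.0, -1.0, 0.5], [0.3, 0.8, -0.2]], [0.0, 0.1])  # first layer
out = layer(h, [[1.2, -0.7]], [0.0])  # final layer: one "stop sign" score
print(0.0 < out[0] < 1.0)  # → True; the sigmoid keeps the score in (0, 1)
```

Data flows in one direction, layer by layer, and the final score is determined entirely by the weighted combinations along the way — the "total of those weightings" the passage refers to.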