Spiking Neural Network and Facial Expression Recognition

Introduction

The spiking neural network (SNN) is considered one of the most promising neural network models today; its computational model aims to understand and replicate human abilities. As a special class of artificial neural network in which neuron models communicate by sequences of spikes, researchers believe this technique is well suited to face recognition, facial expression recognition, and emotion detection.

The work of C. Du, Y. Nan, and R. Yan (2017) supports this. Their paper proposed a spiking neural network architecture for face recognition consisting of three parts: feature extraction, encoding, and classification. For feature extraction they used a four-layer HMAX model to extract facial features, then encoded the features into suitable spike trains; the Tempotron learning rule was used to keep computation low. They evaluated the network on four databases: Yale, Extended Yale B, ORL, and FERET.
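
As an illustration of this pipeline, the sketch below encodes feature values as spike latencies (stronger features fire earlier) and applies a Tempotron-style update: the neuron fires if its peak membrane potential crosses a threshold, and weights are nudged by the postsynaptic-potential kernel when the binary decision is wrong. All parameters and function names here are illustrative, not the authors' exact implementation.

    import numpy as np

    def encode_latency(features, t_max=100.0):
        """Latency coding: stronger feature values spike earlier."""
        f = (features - features.min()) / (np.ptp(features) + 1e-9)
        return t_max * (1.0 - f)              # one spike time per afferent

    def psp(t, tau=15.0, tau_s=3.75):
        """Tempotron postsynaptic potential kernel (zero for t <= 0)."""
        return np.where(t > 0, np.exp(-t / tau) - np.exp(-t / tau_s), 0.0)

    def train_step(weights, spike_times, label, theta=1.0, lr=0.01):
        """One Tempotron update: nudge weights when the fire/no-fire
        decision disagrees with the binary label."""
        t_grid = np.linspace(0.0, 150.0, 300)
        v = weights @ psp(t_grid[None, :] - spike_times[:, None])
        fired = v.max() > theta
        if fired != label:                    # error: adjust at the peak time
            dw = lr * psp(t_grid[v.argmax()] - spike_times)
            weights += dw if label else -dw
        return weights

    # Toy usage: one update on a random 10-dimensional feature vector.
    rng = np.random.default_rng(0)
    w = train_step(rng.normal(scale=0.1, size=10),
                   encode_latency(rng.random(10)), label=True)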

The study of A. Taherkhani (2018) tackled the challenging task of training a population of spiking neurons in a multi-layer network to fire at precise times, noting that delay learning in SNNs had not previously been explored thoroughly. The paper proposed a biologically plausible supervised learning algorithm for learning precisely timed multiple spikes in a multi-layer spiking neural network, training the SNN through the synergy between delay learning and weight learning.

The proposed method achieves higher accuracy than a single-layer spiking neural network, although the results show that a high number of desired spikes can reduce accuracy. The author notes that the algorithm could be extended to more layers; however, additional layers may dilute the effect of training in earlier layers on the output. The researcher intends to improve the algorithm further in terms of performance and computation.
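
To make the idea of delay learning concrete, here is a deliberately toy sketch: each synapse carries a trainable delay, and one update step shifts delays so that presynaptic spikes arrive nearer the desired output spike time. This illustrates only the concept; it is not Taherkhani's actual learning rule.

    import numpy as np

    def delay_learning_step(pre_spike_times, delays, t_desired, lr=0.5):
        """Toy delay update: move each synaptic delay so the arrival time
        (pre-spike time + delay) drifts toward the desired output time."""
        arrival = pre_spike_times + delays
        delays = delays - lr * (arrival - t_desired)   # gradient-style nudge
        return np.maximum(delays, 0.0)                 # delays stay non-negative

    pre = np.array([10.0, 12.0, 8.0])      # presynaptic spike times (ms)
    d = np.array([1.0, 5.0, 9.0])          # initial synaptic delays (ms)
    for _ in range(20):
        d = delay_learning_step(pre, d, t_desired=15.0)
    print(pre + d)                          # arrivals converge toward 15 ms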

The paper of Q. Fu et al. (2017) improves the learning algorithm performance of spiking neural networks. It proposes three refinements to the learning algorithm: back propagation with an inertia (momentum) term, an adaptive learning rate, and a modified measure function. Comparing all four methods, including the original algorithm, the results show that adaptive learning achieved the highest accuracy rate at 90%, while the original algorithm had the lowest; all three proposed methods therefore outperformed the original algorithm.
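
The inertia term and adaptive learning rate mentioned above are standard gradient-descent refinements. A minimal generic sketch (not the paper's exact update rules) shows both ideas: the inertia term carries the previous update direction forward, and the learning rate grows when the loss falls and shrinks otherwise.

    import numpy as np

    def sgd_momentum_adaptive(grad_fn, w, lr=0.1, beta=0.9, steps=100):
        """Gradient descent with an inertia (momentum) term and a simple
        adaptive learning rate."""
        v = np.zeros_like(w)
        prev_loss = np.inf
        for _ in range(steps):
            loss, g = grad_fn(w)
            v = beta * v - lr * g                     # inertia term
            w = w + v
            lr *= 1.05 if loss < prev_loss else 0.7   # adaptive step size
            prev_loss = loss
        return w

    # Toy usage on a quadratic bowl: loss = ||w||^2, gradient = 2w.
    w_opt = sgd_momentum_adaptive(lambda w: (w @ w, 2 * w),
                                  np.array([3.0, -2.0]))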

Facial Expression Recognition

When "facial expression" is mentioned in the research field, many researchers think of P. Ekman and his books on reading emotion from a person's facial expression. In "Unmasking the Face", written with W. V. Friesen, they studied facial expressions and how to identify emotion from them, showing photographs of each of the six basic emotions: happiness, sadness, surprise, fear, anger, and disgust. The question they pose is: are there universal expressions of emotion? When someone is angry, will we see the same expression regardless of their culture, race, or language?

Paknikar (2008) describes a person's face as the mirror of the mind: facial expressions and their changes provide important information about a person's state, truthfulness, temperament, and personality. He adds that with terror activities growing all over the world, detecting potential troublemakers is a major problem, which is why body language, facial expression, and tone of speech are among the best ways to read a person. According to Husak (2017), facial expressions are an important factor in observing human behavior; he also introduced the quick facial motions that appear in stressful situations, typically when a person tries to conceal his or her emotion, called "micro-expressions".

In the study of Kabani, Khan, Khan, and Tadvi (2015), facial expressions were categorized into five types: joy, anger, sadness, surprise, and excitement. They also used an emotion model that selects a song based on any of seven emotion types: joy-surprise, joy-excitement, joy-anger, sad, anger, joy, and sad-anger. Hu (2017) states that efficiency and accuracy are the two major problems in facial expression recognition. Efficiency is measured by time complexity, computational complexity, and space complexity, whereas high accuracy tends to come at the cost of high space or computational complexity. They add that several other factors can affect accuracy, such as pose, low resolution, subjectivity, scale, and identification of the baseline frame.

Other pointers for emotion detection, studied by Noroozi et al. (2018), are the body-language cues that reflect the emotional state of a human being; these include facial expression, body posture, gestures, and eye movements, all of which are important markers for emotion detection. The group of Yaofu, Yang, and Kuai (2012) used a spiking neuron model for facial expression recognition, which processes information represented as trains of spikes; the main advantage of this model, they note, is that it is computationally inexpensive. In their experiment they showed graphical representations of the six universal expressions plus one neutral expression, using subjects with similar facial expressions who were racially different and varied in expression intensity. They found that, among the six expressions, happiness and surprise are the easiest to recognize, while fear is the most difficult.
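
The computational cheapness of spiking neuron models comes from the simplicity of each unit: a neuron merely integrates input current with a leak and emits a spike when a threshold is crossed. A minimal leaky integrate-and-fire sketch (parameters are illustrative) makes this concrete; a stronger stimulus, such as a higher-contrast facial feature, spikes earlier and more often, which is what makes spike trains informative features.

    import numpy as np

    def lif_spike_train(input_current, dt=1.0, tau=20.0,
                        v_thresh=1.0, v_reset=0.0):
        """Leaky integrate-and-fire: integrate input, spike and reset
        whenever the membrane potential crosses the threshold."""
        v, spikes = 0.0, []
        for t, i_t in enumerate(input_current):
            v += dt * (-v / tau + i_t)       # leaky integration
            if v >= v_thresh:
                spikes.append(t * dt)        # record spike time
                v = v_reset
        return spikes

    print(lif_spike_train(np.full(100, 0.08)))   # weak input: sparse spikes
    print(lif_spike_train(np.full(100, 0.15)))   # strong input: earlier, denser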

In the research of Wi Kiat Tay (2017), an emotion analytics solution uses computer vision to recognize facial expressions automatically from live video. They also studied anxiety and depression, considering both to be part of emotion, and hypothesized that anxiety is a subset of the emotion "fear". According to S. W. Chew (September 2013), in his study of recognizing facial expressions, an automatic facial expression recognition system contains three fundamental components: face detection and tracking, mapping of signals to more distinct features, and classification of the unique patterns of those features.

In their paper "Facial Expression Recognition", N. Sarode and S. Bhatia (2010) studied facial expression as the best way of detecting emotion. They used a 2D appearance-based local approach for facial feature extraction and a radial symmetry transform as the base of the algorithm, creating a dynamic spatio-temporal representation of the face. Overall, the algorithm achieved 81.0% robustness.

For facial images and databases, the work of J. L. Kiruba and A. D. Andrushia (2013), "Performance analysis on learning algorithm with various facial expression on spiking neural network", used a spiking neural network and compared two facial image databases: the JAFFE database, which contains 213 images of 7 facial expressions posed by 10 Japanese women, and the MPI database, which contains 55 different facial expressions covering various emotional and conversational expressions.

In the end, the JAFFE database achieved the highest overall recognition rate compared to the MPI database.

The research of Y. Liu and Y. Chen (2012) states that automatic facial expression recognition is an interesting and challenging problem, and that deriving features from raw facial images is the vital step of a successful approach. Their system combined Convolutional Neural Network (CNN) features with Centralized Binary Pattern (CBP) features and classified them using a Support Vector Machine (a sketch of this feature-plus-classifier pattern follows below). Evaluated on two datasets, the CNN-CBP approach achieved 97.6% accuracy on the Extended Cohn-Kanade dataset and 88.7% on the JAFFE database.

M. B. Mariappan, M. Suk, and B. Prabhakaran (December 2012) created a multimedia content recommendation system based on the user's facial expression. The system, called "FaceFetch", understands the current emotional state of the user (happiness, anger, sadness, disgust, fear, and surprise) through facial expression recognition and recommends multimedia content such as music, movies, and other videos from the cloud with near real-time performance. They used the ProASM feature extractor, which proved more accurate, faster, and more robust, and the application received very good responses from all the users who tested it.
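
As a side note on the feature-plus-classifier pattern used by Liu and Chen above, here is a minimal scikit-learn sketch with random stand-in features; the actual CNN and CBP descriptors are not reproduced, only the shape of the pipeline.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Stand-in for features extracted per face image (e.g. CNN + CBP vectors).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 128))          # 200 faces, 128-dim features
    y = rng.integers(0, 7, size=200)         # 7 expression classes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=0)
    clf = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))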

The technique proposed by T. Matlovic, P. Gaspar, R. Moro, J. Simko, and M. Bielikova (October 2016) uses facial expressions and electroencephalography (EEG) for emotion detection. First, they analyzed existing tools that employ facial expression recognition for emotion detection; second, they proposed an EEG-based emotion detection method that employs existing machine learning approaches. They conducted an experiment in which participants watched emotion-evoking music videos while an Emotiv EPOC headset recorded their brain activity, achieving 53% accuracy in classifying emotion. They also note that the potential of automatic emotion-based music is far-reaching, because it provides a deeper understanding of human emotion.

Patel et al. (2012) described music as the "language of emotion", giving the example of an 80-year-old man and a 12-year-old girl: different generations and different tastes in music, yet the same emotional result, since both can be happy after listening to music of their own generation. Their system aimed to serve music lovers through facial recognition, saving the time spent browsing and searching in a music player. P. Oliveira (2013) studied a musical system for emotional expression, aiming to find a computational system for controlling the emotional content of music so that it conveys a specific emotion. He adds that such a system must be flexible, scalable, and independent of musical style; he defines flexibility at several levels (segmentation, selection, classification, and transformation) and requires that scalability allow the music produced to be unique.

Jha et al. (2015) created a facial-expression-based music player that provides an interactive way of building and playing a playlist from the emotion of the user. The system uses facial detection to process the user's facial expression, classifies the facial features extracted from a real-time graphical event into an emotion, and then maps that emotion to an appropriate song or playlist as output. Mood-based on-car music recommendation, studied by Cano (2015), focuses on the mood, music, and safe driving of the user. He points out three definitions drawn from psychiatric work: affect, a neurophysiological state accessible as primitive, non-reflective behavior but always available to consciousness; emotional episode, a set of interrelated sub-events concerning a particular object; and mood, the designation of affective states about the world in general.
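
The pipeline these music-player systems share, classify the face and map the predicted emotion to a playlist, can be sketched in a few lines; the emotion labels, file names, and mapping below are all made up for illustration.

    # Hypothetical glue code: map a predicted facial emotion to a playlist.
    PLAYLISTS = {
        "happy":    ["upbeat_pop.mp3", "dance_mix.mp3"],
        "sad":      ["slow_ballad.mp3", "acoustic_set.mp3"],
        "angry":    ["calming_piano.mp3"],
        "surprise": ["discovery_queue.mp3"],
    }

    def recommend(detect_emotion, frame):
        """detect_emotion is any classifier returning a label from above."""
        emotion = detect_emotion(frame)
        return PLAYLISTS.get(emotion, ["default_playlist.mp3"])

    # Usage with a stub standing in for the real facial-expression model.
    print(recommend(lambda frame: "happy", frame=None))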

The research of M. Rumiantcev (2017) on emotion-driven recommendation systems notes that people face the problem of music choice; it demonstrates the feasibility of emotion-based music recommendation, managing human emotion by delivering music playlists based on the user's recent personal listening experience. K. Monteith (2012) presents a system that generates original music matching a user's emotion using n-gram models and hidden Markov models, building a separate training set for each of the six basic emotions: love, joy, anger, sadness, surprise, and fear.
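
Monteith's n-gram approach amounts to learning transition probabilities between notes from an emotion-specific corpus. A toy first-order Markov sketch (the corpus and note names are made up) shows the idea; one such model would be trained per target emotion.

    import random
    from collections import defaultdict

    def train_markov(corpus):
        """Count note-to-note transitions (a first-order n-gram model)."""
        transitions = defaultdict(list)
        for melody in corpus:
            for a, b in zip(melody, melody[1:]):
                transitions[a].append(b)
        return transitions

    def generate(transitions, start, length=8):
        """Walk the transition table to emit a new melody."""
        melody = [start]
        for _ in range(length - 1):
            nxt = transitions.get(melody[-1])
            if not nxt:
                break
            melody.append(random.choice(nxt))
        return melody

    # A "joy" corpus might favor major-scale motion, a "sad" one minor.
    joy_corpus = [["C4", "E4", "G4", "E4", "C4"], ["E4", "G4", "C5", "G4"]]
    print(generate(train_markov(joy_corpus), start="C4"))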

The work of R. Madhok, S. Goel, and S. Garg (2018) proposed a framework that generates music based on the user's emotion as predicted by their model, which is divided into two parts: an image classification model and a music generation model. A Convolutional Neural Network classifies the input image into one of 7 major emotions (anger, sadness, disgust, surprise, fear, happiness, and neutral), and the music is generated by an LSTM (Long Short-Term Memory) network. Finally, they used the Mean Opinion Score (MOS) to evaluate performance.
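
One plausible way to wire an emotion label into an LSTM note generator is to concatenate an emotion embedding onto every timestep's input. The PyTorch sketch below is an assumption about the general shape of such a model, not Madhok et al.'s exact architecture; all layer sizes are illustrative.

    import torch
    import torch.nn as nn

    class EmotionMusicLSTM(nn.Module):
        """Sketch of an LSTM note generator conditioned on an emotion id."""
        def __init__(self, n_notes=128, n_emotions=7, emb=32, hidden=256):
            super().__init__()
            self.note_emb = nn.Embedding(n_notes, emb)
            self.emo_emb = nn.Embedding(n_emotions, emb)
            self.lstm = nn.LSTM(2 * emb, hidden, batch_first=True)
            self.out = nn.Linear(hidden, n_notes)

        def forward(self, notes, emotion):
            # Append the emotion embedding to every timestep's note embedding.
            e = self.emo_emb(emotion).unsqueeze(1).expand(-1, notes.size(1), -1)
            x = torch.cat([self.note_emb(notes), e], dim=-1)
            h, _ = self.lstm(x)
            return self.out(h)               # logits over the next note

    model = EmotionMusicLSTM()
    notes = torch.randint(0, 128, (1, 16))      # a 16-step note sequence
    logits = model(notes, torch.tensor([3]))    # emotion id 3, e.g. "surprise"
    print(logits.shape)                         # torch.Size([1, 16, 128])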

The paper of K. S. Nathan, M. Arun, and M. S. Kannan (2017) set out to design an accurate algorithm for generating a list of songs based on the user's emotional state. They evaluated four algorithms (SVM, random forest, k-nearest neighbors, and a neural network), comparing them by mean squared error and R² score. The results show that, among the four, SVM has the lowest mean squared error and the highest R² score, making it the best algorithm in terms of performance and regression.
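
A comparison of this kind is straightforward to reproduce in scikit-learn. The sketch below runs the same four model families on random stand-in data and reports MSE and R² for each; the data and hyperparameters are placeholders, not the paper's.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error, r2_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 10))             # stand-in emotion features
    y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=300)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

    models = {
        "SVM": SVR(),
        "Random forest": RandomForestRegressor(random_state=1),
        "KNN": KNeighborsRegressor(),
        "Neural network": MLPRegressor(max_iter=2000, random_state=1),
    }
    for name, m in models.items():
        pred = m.fit(X_tr, y_tr).predict(X_te)
        print(f"{name}: MSE={mean_squared_error(y_te, pred):.3f}, "
              f"R2={r2_score(y_te, pred):.3f}")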

The work of S. Gilda, H. Zafar, C. Soni, and K. Waghurdekar (2017) proposed a music recommendation system based on the user's facial emotion. The music player contains three modules: an emotion module, a music classification module, and a recommendation module. The emotion module takes a photo of the user as input and uses a deep learning algorithm to identify the mood with 90.23% accuracy. The music classification module achieved a remarkable 97.69% accuracy while classifying songs into 4 mood classes. The recommendation module recommends songs by mapping the user's emotion to the mood type of the song. Besides facial expression, there is another signal for identifying emotion: speech, or the voice.

The work of S. Lukose and S. S. Upadhya (2017) created a music player driven by the emotion in the user's voice signals using a speech emotion recognition (SER) system. It involves speech processing on the Berlin emotional database, followed by feature extraction and classification methods to identify the emotional state of the speaker. Once the speaker's emotion is recognized, the system automatically picks music from the playlist database. The results show that the SER system, implemented over five emotions (anger, anxiety, boredom, happiness, and sadness), achieved a successful emotion classification rate of 76.31% using a Gaussian Mixture Model, with the best overall accuracy of 81.57% obtained using an SVM model.
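
A common SER recipe along these lines extracts MFCC features per utterance, then trains either a discriminative SVM or one generative Gaussian mixture per emotion class. The sketch below assumes librosa is available for MFCC extraction and uses synthetic stand-in audio; real work would use a labelled corpus such as the Berlin emotional database mentioned above.

    import numpy as np
    import librosa
    from sklearn.mixture import GaussianMixture
    from sklearn.svm import SVC

    def mfcc_features(signal, sr=16000, n_mfcc=13):
        """Mean MFCC vector for one utterance (a common compact SER feature)."""
        return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

    # Synthetic stand-ins for labelled utterances.
    rng = np.random.default_rng(2)
    X = np.array([mfcc_features(rng.normal(size=16000)) for _ in range(100)])
    y = rng.integers(0, 5, size=100)            # 5 emotions, as in the paper

    # SVM: one discriminative classifier over all classes.
    svm = SVC().fit(X, y)

    # GMM: one generative model per emotion; classify by max log-likelihood.
    gmms = {c: GaussianMixture(n_components=1, covariance_type="diag",
                               random_state=2).fit(X[y == c])
            for c in np.unique(y)}

    def gmm_predict(x):
        return max(gmms, key=lambda c: gmms[c].score(x.reshape(1, -1)))

    print(svm.predict(X[:1]), gmm_predict(X[0]))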
