FacePlay was created as a final project in my Senior Design class (CSE 379) at Lehigh University. For this project, my teammate and I were free to develop an application of any nature, and we chose to build something that makes a user's music-listening experience easier. FacePlay is an Android application that suggests playlists based on user input, so that users do not have to spend time making playlists or scrolling through their music library looking for the handful of songs that match their mood at the moment. The project was advised by Xiaolei Huang.
A FacePlay user is anyone who wants to sort music more efficiently. The application is aimed at anyone who listens to music and gets frustrated while creating playlists. FacePlay can be used while driving, walking, or during other activities where the user's full attention should be elsewhere. On a smaller scale, the application serves users who are in a particular mood and simply do not want to take time out to find appropriate songs to listen to at the moment.
Duration: 1 Semester
Team Size: A team of two B.S. Computer Science in Engineering students.
Skills Exercised: Android Programming
I was responsible for designing the poster and mocking up the app layout. The coding was completed by both my teammate and me.
Have you ever wanted to tell your phone just what to play?
Most of the time when listening to music, we come across those moments when we have to press the “next” button more than five times just to reach a song we are in the mood for, and right after that song ends, we have to press “next” a few more times to find another one to enjoy. This becomes annoying and can throw a person out of the music-listening mood altogether. FacePlay incorporates user input to create playlists and ultimately counteract this problem.
Too busy to find the right song?
FacePlay is important because it takes away much of the frustration of not being able to find the right songs at the right moments, making it a little easier for users to live in the moment. Whether it is used for the romantic drive home after date night or for a quick run at the gym, FacePlay’s aim is to make a user’s listening experience more enjoyable by facilitating music selection.
Scope To Meet Customer Requirements:
- Fully functioning music player for a mobile Android device.
- Preset database of songs
- Sorting by either the name of the artist, the name of the song, or the genre of the song
- Drop down menu with different activities and moods for the user to select from
- Collection of songs that are appropriate for these activities or to match the user’s mood
Desired Project Deliverables:
- Voice recognition Component
- Example: if the user says an emotion, such as “happy”, or an activity, such as “gym”, the corresponding playlist will be given.
- FacePlay may be able to access the Google Play Music Store
- the user can purchase music to add to the pre-made database of songs that comes with the application
- Facial Recognition Component
- use facial recognition to also read the emotion expressed by the user
- a playlist will be given based on the facial expression reading; the suggested songs will match the emotion expressed
The application was built from several components: a basic music player with general controls (play, pause, next, previous); a drop-down menu of different emotions and activities; preset playlists organized in folders by emotion and activity; and the logic that links those folders to the user's drop-down selection so the appropriate songs are shown. Together, these components make up the required deliverables. Of the desired deliverables (voice recognition, Google Play Store access, and facial expression recognition), we completed the voice recognition implementation and the Google Play Store access. We hope to start and complete the facial expression component in our next steps.
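The linking logic described above can be sketched as a lookup from the selected drop-down label to its preset folder of songs. This is a minimal illustration only; the class, method, and file names here are hypothetical, not the actual FacePlay code.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: map a drop-down selection to its preset playlist.
public class PlaylistSelector {

    // Preset playlists keyed by the mood/activity label shown in the drop-down.
    // Song file names are placeholders for the files in each playlist folder.
    private static final Map<String, List<String>> PLAYLISTS = Map.of(
        "Happy",            List.of("happy_song_1.mp3", "happy_song_2.mp3"),
        "Sad",              List.of("sad_song_1.mp3"),
        "Love",             List.of("love_song_1.mp3"),
        "Going to the Gym", List.of("gym_song_1.mp3"),
        "Going to Sleep",   List.of("sleep_song_1.mp3"),
        "Party it Up",      List.of("party_song_1.mp3")
    );

    // Return the songs for the selected entry, or an empty list if unknown.
    public static List<String> songsFor(String selection) {
        return PLAYLISTS.getOrDefault(selection, List.of());
    }
}
```

In the app itself, the drop-down's selection callback would pass the chosen label into a lookup like this and hand the resulting song list to the player screen.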
- Customer Requirements – Minimum Scope
- Fully functioning music player
- 6 preset playlists with a minimum of 10 songs in the database
- Happy, Sad, Love, Going to the Gym, Going to Sleep, Party it Up
- Drop down menu to choose type of playlist
- Each playlist holds songs to match action or mood selected
- Functionalities of the Application
- Opening of the application
- Navigating pages
- Operational buttons
- Drop Down Menu Functional
- Accurate Playlist Display
- Accurate Song Play
- File Connections
How the Components Work Together:
- FacePlay suggests music playlists based on what users are doing or how they feel at the moment.
- Users can manually select an emotion or activity from the drop down.
- Users can verbalize an emotion or activity.
1. How does the player know that the songs in each playlist are correct for that mood or action?
Using outside sources, we added songs that were deemed fit for each activity or mood. As noted above, you can also add your own music to the playlist you would want to listen to while feeling any of the 3 emotions or doing any of the 3 activities.
2. Does the voice recognition component only recognize words from the preset playlist?
The voice recognition component recognizes any word but is currently linked only to the 6 playlists shown in the drop-down menu. For the activities, you do not have to say the full name; you can say “sleep” instead of “going to sleep”, “gym” instead of “going to the gym”, etc.
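The partial-keyword behavior above can be sketched as a scan of the recognized phrase for known keywords. This is a hypothetical illustration under assumed names, not the actual FacePlay implementation; in the real app the phrase would come from Android's speech recognizer.

```java
import java.util.Locale;
import java.util.Map;

// Hypothetical sketch: map a recognized spoken phrase to a preset playlist.
public class VoiceKeywordMapper {

    // Lower-cased keywords the spoken phrase may contain, mapped to playlists.
    private static final Map<String, String> KEYWORDS = Map.of(
        "happy", "Happy",
        "sad",   "Sad",
        "love",  "Love",
        "gym",   "Going to the Gym",
        "sleep", "Going to Sleep",
        "party", "Party it Up"
    );

    // Scan the phrase for any known keyword; return null if none matches.
    public static String playlistFor(String spokenPhrase) {
        String phrase = spokenPhrase.toLowerCase(Locale.ROOT);
        for (Map.Entry<String, String> entry : KEYWORDS.entrySet()) {
            if (phrase.contains(entry.getKey())) {
                return entry.getValue();
            }
        }
        return null;
    }
}
```

Because matching is by containment, saying either “gym” or the full “going to the gym” resolves to the same playlist, which is the shortcut behavior described above.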
3. Where do the songs that I purchase from the Google store go?
The songs will then be added to the “All Music” playlist within FacePlay.
4. Can I sort the songs that I bought?
You can currently sort the songs that you bought by copying and pasting them into the playlist you want, once they have finished downloading from the Google Play Store. In the future, a more convenient interface will be created for this task.
5. Can I create my own playlist and add my songs to it?
You cannot create your own playlist outside of the 6 provided right now. In the future you will be able to accomplish this task.
The ultimate goal of FacePlay is to combine a user’s facial expression with the concept of suggested playlists to make the ultimate music application. Due to time constraints, we were only able to complete the background research for incorporating the facial expression recognition component. On the next iteration, this project will be able to use facial recognition to read the emotion expressed by the user. Depending on the facial expression read, a playlist of suggested songs matching the emotion expressed will be proposed to the user.
Click here to view the User Documentation and System Documentation for FacePlay.