B. TECH. PROPOSAL SEMINAR
A MOBILE APPLICATION THAT TEACHES ALPHABETS
DANIEL SAMUEL ESHOWOGAFOR
FEDERAL UNIVERSITY OF TECHNOLOGY, AKURE
COMPUTER SCIENCE DEPARTMENT
SUPERVISOR: DR. O. AYENI
23RD OF MARCH, 2018
PROJECT TOPIC: A MOBILE APPLICATION THAT TEACHES ALPHABETS
BY: DANIEL SAMUEL ESHOWOGAFOR (CSC/13/4995)
BACKGROUND OF THE STUDY
With the technological advancement of the present age, generally known as the ‘jet age’, the rate at which children use sophisticated devices, mostly to play mobile games, is alarming. These kids also tend to learn and copy traits from video and graphic content while they lag behind in their school work. To children, the classroom learning method is usually boring, and they find it hard to grasp concepts during the learning process. This raises the question of why kids learn faster through viewing video and graphic content than through the traditional classroom learning method. This question is answered by Sweeney et al. (2012), whose research showed that the use of a variety of media applications to explain concepts increases understanding and supports greater collaboration amongst students.
The Merriam-Webster dictionary defines learning as the act of gaining knowledge of, understanding of, or skill in something by study, instruction, or experience. Learning in children is an area of utmost importance, especially in building a society that is free from drugs, crime and other vices; as a Nigerian rhyme sings, “We (children) are the leaders of tomorrow”. Much research has therefore been carried out to find new ways to stimulate learning in children. The importance of learning in children cannot be over-emphasized.
Traditional learning of the English alphabet is often through singing. Whilst singing can be an interest-stimulating learning approach, a major downside is that students tend to memorize the order of the alphabet: most times, when asked to name a single letter, students sing the alphabet from the beginning until they come to the letter in question. With the advent of Information and Communications Technology (ICT), however, the past decades have seen the adoption of technology-based learning in which digital teaching and learning materials are widely used. Learning the alphabet is no exception; several educational and edutainment applications have been created for it.
Past research has shown that children tend to learn faster through graphic content, which may include pictures, videos, animations or any combination of them. In education, teaching and learning methods are among the most important factors; in many cases, teaching methods can create difficulties or barriers in learning and understanding, especially for young children. Fun is therefore a very important element of learning in children, as Plato stated: “Do not train children to learn by force and harshness, but direct them to it by what amuses their minds, so that you may be better able to discover with accuracy the peculiar bent of the genius of each”. When children play, they participate, think, discover ideas and gain the experiences they are exposed to. Anne Haas Dyson (2013) mentioned that play is where literacy and learning begin. Introducing play and fun into education will therefore ease the learning process and promote participation by students. Bhadra et al. (2016) reveal that one potential way to increase reading performance and comprehension is to get children interested in reading. This can be done by developing educational games and videos and by incorporating fun-learning platforms like Augmented Reality systems.
Augmented Reality (AR) is the overlaying of a virtual environment on the physical environment. As opposed to virtual reality (VR), where the real world is entirely replaced by a virtual one, AR allows the user to interact with virtual images using real objects in a seamless way. A widely accepted definition of AR is any system that combines the real and virtual worlds, is interactive in real time, and is registered in 3D. Here, the term registration means the accurate alignment of real and virtual objects with respect to each other. Without accurate registration, the appearance of the coexistence of virtual elements with physical objects in the real environment would be severely compromised. AR is an emerging technology with potential for application in education, but while a lot of research has been conducted on AR, very few studies have focused on the educational field. Yeome (2011) used AR 3D anatomy pictures and haptic feedback to teach and test anatomy knowledge of the abdomen. Cerqueira et al. (2012) used a head-mounted display and a personal interaction panel to teach geometry through 3D geometrical concepts. Martin et al. (2011) developed a mobile AR educational game and used it to gather information and enhance the experience of visitors to cultural organizations (museums and archaeological sites).
Animation is also a very useful tool for stimulating interest in children. Animation is the technique of making inanimate objects or drawings appear to move in motion pictures or computer graphics. There are two types of animation: two-dimensional (2D) and three-dimensional (3D). While 2D animation is attractive and fun-inducing, 3D animation produces richer, more realistic results. Farhah et al. (2015) showed that animation, virtual environments and simulation are visualization technologies which have been examined in previous research as ways to promote fun learning and comprehension in young children. Robertson et al. (2008) also found that animation, together with interesting data and an engaging presenter, helps an audience understand the results of an analysis of information. These visualizations can be used to address the problem of poor retention and help students learn better. Combining AR and 3D animation in the education of children will therefore go a long way in stimulating interest to learn.
The main purpose of this project is to create an interactive fun-learning platform that utilizes AR to teach children between ages 3 and 5 the English alphabet, and to discover how augmented reality affects students’ academic performance. The developed application can be used as a classroom learning tool to complement traditional learning methods. It can also be used as study material by kids at home.
Wickramasinghe et al. (2015) developed a mobile application to enhance the Sinhala alphabet learning experience of children from Sri Lanka using an Augmented Reality approach. This system was, however, limited to the use of static 3D models. Ahmad Azri (2014) developed a prototype augmented reality application for learning the Jawi alphabet. Objects used in this project were limited to 3D words and 2D pictures, and the application was based on the Windows Operating System. Rasslenda-Raus Rasalingam et al. (2014) also developed an AR application to study the effectiveness of using augmented reality technology in early childhood education. That application, however, was made for devices running iOS alone and employed only static (non-animated) 3D virtual objects. From the related works reviewed above, there is a need to incorporate 3D animation and sound effects into alphabet-learning applications to stimulate fun and interest to learn.
The main objectives of this research are:
To design an interactive alphabet teaching application for children.
To implement the application designed above.
To evaluate the perception of the audience towards augmented reality learning.
To determine how augmented reality affects students’ academic performance.
This project is proposed in order to enhance alphabet learning in children. The project entails designing a simple mobile application to attract children to play and drive them to learn. The solution will run on the Android Operating System, since it is the most commonly used operating system in Nigeria and the most accessible to the targeted users.
A review of related research works will be carried out, so as to find out areas of improvement. The augmented reality mobile application will then be developed. With ABC tutor:
Users can scan a target image
A 3D visualization of the object matching the scanned target image pops up, with which users can interact in real time.
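The scan-and-display loop above can be sketched as follows. This is an illustrative Python sketch only: the actual application will be written in C# with Unity and Vuforia, and the target names, model names and sound files here are hypothetical placeholders.

```python
# Illustrative sketch of the ABC tutor scan-and-display loop.
# All target/model/sound names below are hypothetical placeholders.

TARGET_DATABASE = {
    "target_A": {"letter": "A", "model": "apple_3d", "sound": "a_phoneme.wav"},
    "target_B": {"letter": "B", "model": "ball_3d",  "sound": "b_phoneme.wav"},
}

def recognize_target(frame):
    """Stand-in for Vuforia's image-target recognition: return the
    name of the matched target, or None if nothing is tracked."""
    return frame.get("detected_target")

def on_camera_frame(frame):
    target = recognize_target(frame)
    if target is None:
        return None                      # nothing tracked; keep scanning
    entry = TARGET_DATABASE[target]
    # In Unity, this step would instantiate the animated 3D model over
    # the tracked image and play the letter's pronunciation.
    return f"show {entry['model']} for letter {entry['letter']}"

print(on_camera_frame({"detected_target": "target_A"}))
```

The loop runs once per camera frame: as long as no target is tracked, nothing is rendered; once Vuforia reports a match, the corresponding model and sound are looked up and displayed.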
Next, a research survey on students between ages 3 and 5 will be carried out, to determine how the learning app affects learning, and to get students’ perception of the app.
Figure 1.1: Process of Mobile Augmented Reality
The diagram below shows how the learning app works:
The C# programming language will be used to implement the application logic. C# is chosen because it supports Object-Oriented Programming (OOP), which helps produce code that is easy to maintain and debug. The Unity 3D game engine will be used as the development environment. Autodesk Maya, an industry-standard tool for creating 3D animations, will be used to create and animate the 3D models. Qualcomm's Vuforia SDK will be added as a library for tracking and recognizing target images.
A study will be carried out on 20 nursery pupils aged 3 to 5 from different schools to evaluate the perception of the audience towards augmented reality learning and to determine how augmented reality affects students’ learning performance.
CONTRIBUTION TO KNOWLEDGE
A learning mobile application that teaches the English alphabet whilst incorporating fun-learning and stimulating interest in children would have been developed.
A mobile application is a computer program designed to run on a mobile device such as a phone, tablet or watch. According to Techopedia, “A mobile application, most commonly referred to as an app, is a type of application software designed to run on mobile devices, such as a smartphone or tablet computer. Mobile applications frequently serve to provide users with similar services to those accessed on personal computers”. Mobile applications are different from desktop applications, which run on desktop computers, and from web applications, which run in mobile web browsers rather than directly on the mobile device.
The first mobile phone call was made in April 1973 by Martin Cooper of Motorola Inc., while the first smartphone, the IBM Simon, was unveiled by IBM in 1992 and released for general use in 1994, equipped with features like a calculator, world clock and address book.
Mobile applications were originally offered for general productivity and information retrieval, including email, calendar, contacts and currency conversion. However, public demand and the availability of developer tools drove rapid expansion into other categories, such as those handled by desktop application software packages.
Most mobile devices are sold with several applications bundled as pre-installed software, such as web browser, email client, calendar and calculator. Some pre-installed applications can be removed by an ordinary uninstall process, thus leaving more storage space for desired ones. Where the software does not allow this, some devices can be rooted to eliminate the undesired applications.
According to statistics, we now spend more time with our smartphones than in front of personal computers, and by 2017 mobile applications had become an essential part of our lives. They can be used to chat with friends, pay taxes, order goods and services, take photos, play games and carry out many other activities. An early milestone in the mobile experience came in 2000, when Nokia Inc. released its Nokia 3310, which shipped with the popular Snake game.
THE ENGLISH ALPHABET
According to Wikipedia, “An alphabet is a standard set of letters (basic written symbols) that is used to write one or more languages based upon the general principle that the letters represent phonemes of the spoken language”. The modern English alphabet is a Latin alphabet consisting of 26 letters, each having an uppercase and a lowercase form. The letters constitute the ISO basic Latin alphabet and are shown below:
Aa Bb Cc Dd Ee Ff Gg Hh Ii Jj Kk Ll Mm Nn Oo Pp Qq Rr Ss Tt Uu Vv Ww Xx Yy Zz
As early as age 3, children are taught the letters of the alphabet. This is done in kindergarten and nursery schools through different classroom teaching methods, which may include singing, recitation, dance, drama and so on.
THE USE OF MOBILE APPLICATIONS TO TEACH THE ALPHABET
Some teachers believe that the traditional method of teaching is the only way, while others claim that students need to be more actively involved in the learning process. Dayang et al. (2013), on their part, argued that introducing fun and interactive learning to children could be a good enhancement of teaching and learning for young learners. In recent times, therefore, various applications are being developed by software programmers to complement the classroom teaching method and help young kids learn faster.
Games are the most popular digital activity for children aged two to fourteen, with the highest usage penetration among mobile device users (NPD Group, 2007). Digital games fall into a similar category as board games and other self-correcting learning tools, and mirror children’s natural play interactions like practice play, make-believe play and games with rules. “Digital games have potential as a tool in teaching preschool-aged children because they can provide instant feedback, are flexible, empower children, and foster active learning” (Warren Buckleitner, 2012). Apps are rapidly emerging as a new medium for providing educational content to children. According to a study carried out in 2012, most top-selling paid apps in the education category of the iTunes Store target children, and over half of all educational apps target preschoolers (Carly et al., 2012). Oblinger et al. (2013) and R. Van Eck (2006) point out that the model of learning employed by schools does not keep pace with the speed of children’s reasoning and, most of the time, does not motivate them.
Slowly, mobile applications and games are being employed in schools to teach young children the alphabet, and teachers have observed good results among students who performed weakly under the traditional method (Virvou et al., 2005).
Some related works include The ABC Game (Bortoloti et al., 2001), ABC3D (Bhadra et al., 2016), Learning Jawi Alphabets (2014), the Sinhala Learning App (Wickramasinghe et al., 2015), the Educative AR App (Rasalingam et al., 2014), Augmented Biology, ARTutor (Chris et al., 2018), etc.
A Mobile Application with Augmented Reality to Enhance the Sinhala Learning Experience for Children, Wickramasinghe et al. (2015).
The need to teach children the letters of the Sinhala alphabet, to improve their cognitive skills and language fundamentals through interactive activities.
To create an environment for learning things with fun and motivational background for children.
A study was conducted on children’s learning and the psychological factors affecting children’s understanding, and a solution was developed: an attractive mobile application that runs on the Android operating system. A letter is printed on paper. When the user runs the mobile application, the camera of the mobile phone is turned on and the user focuses it on the printed paper. As soon as the image on the paper is tracked, a 3D version of the letter pops up on top of the paper while the user hears its correct pronunciation. In addition, an object related to the letter is displayed on top of the paper. The user can interact with the letter and learn its writing pattern, pronunciation and usage through the solution.
In the solution, a printed image, called the image target, is used to generate the 3D model in augmented reality. Once the image is tracked by the phone’s camera, the captured frame is passed to the SDK, which matches the extracted feature points against the image targets bundled in the Unity application package. After identifying the correct image target, the system keeps the target tracked and generates a three-dimensional object on top of it.
Most students requested to use the Augmented Reality application repeatedly. Teachers and parents also showed a high level of enthusiasm with 85% of the respondents giving extremely positive feedbacks.
There was no use of audio-visuals or animations in this work. Also, the work was limited to teaching kids the Sinhala language alone.
Interactive Digital Learning Materials for Kindergarten Students in Bangladesh, Baharul et al. (2013).
The need for high quality and realistic learning environment for children.
To find out playgroup students’ responses to interactive digital material on the English alphabet.
To identify children’s abilities to cope with digital learning materials.
The 26 English letters, with corresponding words and objects, were developed frame by frame. Ten Bengali digits (1 to 10) and corresponding objects were also developed for children to recognize. All frames were then rearranged and produced as an interactive learning video for children.
Also, a pre-test was taken on 52 kindergarten children (aged 4-5 years) on the developed learning materials with the help of class teachers. Then the developed interactive learning material was shown to the children using a projector. A video was played and the children were asked individually about the letters and their corresponding words and objects.
Most of the children recognized the words and objects corresponding to the letters. Only a few students could not identify objects with their corresponding words. For them, the same video was played again and the weaker children were asked a second time. Surprisingly, the children recognized more objects and words than the first time; possibly they got some ideas from the first video display and received help from the other successful children.
The figure below shows the results of the tests carried out before and after playing the interactive learning material. Nineteen students were recorded as already knowing the letters, digits and words with corresponding objects.
Figure 2.1: Histogram of results of the test carried out on the participants
This solution was based on the learning ability of Bangladeshi students. Also, the solution was presented to the students in the form of video.
Fun Learning with AR Alphabet Book for Preschool Children, Dayang et al. (2013).
The need to teach young children the Alphabet through fun-learning and use of Augmented Reality (AR)
To develop an AR Alphabet book to enhance learning by utilizing AR technology.
To evaluate the influence of an AR book for learning over traditional books.
An Augmented Reality based computer program was created for teaching the alphabet to preschool children. Used together with a webcam and a computer, users can view the superimposed virtual alphabet on the computer screen. ARToolkit software was used to develop the application.
In the developed application, the user can view 3D models of the uppercase and lowercase forms of each letter. By placing the appropriate pattern on the placeholder indicated on the book page, a model of the respective letter is displayed over the pattern marker and seen on the computer monitor. Also, for each letter, words and their corresponding 3D models are displayed.
An observational study was carried out at a preschool located in Tronoh, Malaysia, to investigate users’ perception of the AR alphabet book. A class of 15 five-year-old students was selected as participants. The students were introduced to the AR book application and then allowed to freely explore the book.
Following the study, each student was asked to answer a short survey of five simple questions. The figure below shows the participants’ feedback about the book:
Figure 2.2: Bar chart of participants’ feedback to the AR book
The AR app did not utilize animated 3D models or sound effects.
Due to over-excitement, students placed their hands over the pattern markers, preventing them from being detected by the camera.
This research was limited to work on Personal Computers only.
Educating Through Mobile Devices: The ABC Game, a Study Case, Bortoloti et al. (2014).
The need to provide tools for supporting learning through mobile gaming.
To propose and develop an educational electronic game for mobile devices.
To aid playfulness and motivation in children during the study process.
The ABC Game was developed to run on the Android Operating System. For the development, the Eclipse IDE (Integrated Development Environment), the Android SDK (Software Development Kit) and the ADT (Android Developer Tools) Plugin were used.
Some of the games in the ABC Game use the drag-and-drop system to provide a more natural interaction. A ‘Drag and Drop’ class that works with Android 3.0 or higher was used in the development process. The solution used a single and customized View, that is, everything that is seen on the screen by the user is contained in one View that is created and modified dynamically.
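The drag-and-drop matching described above can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the ABC Game's actual Android code: the original uses Android's drag-and-drop API inside a single custom View, and the slot/tile names here are hypothetical.

```python
# Sketch of drag-and-drop matching logic like the ABC Game's: a dragged
# letter tile is dropped, and we check whether it landed on its matching
# target slot. Slot geometry and letters are illustrative.

def hit_test(drop_x, drop_y, slot):
    """Return True if the drop point falls inside the slot rectangle."""
    x, y, w, h = slot["x"], slot["y"], slot["w"], slot["h"]
    return x <= drop_x <= x + w and y <= drop_y <= y + h

def on_drop(tile_letter, drop_x, drop_y, slots):
    for slot in slots:
        if hit_test(drop_x, drop_y, slot):
            return slot["letter"] == tile_letter   # correct match?
    return False                                    # dropped on empty space

slots = [{"letter": "A", "x": 0,  "y": 0, "w": 50, "h": 50},
         {"letter": "B", "x": 60, "y": 0, "w": 50, "h": 50}]
print(on_drop("A", 25, 25, slots))   # 'A' tile dropped on the 'A' slot
```

In the single-View design the same idea applies: the View tracks each drawn slot's rectangle itself and performs this hit test in its drop handler, redrawing the screen dynamically.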
The developed game consists of different activities related to helping kids exercise and familiarize themselves with the alphabet. It is a fun and engaging application and serves as a valuable assistance to child literacy.
The game is only available in Portuguese language.
There was no use of animated graphics in this solution.
Conducting Evaluation Studies of Mobile Games with Preschoolers, Laila et al. (2013)
To discuss strategies for evaluating mobile games with three to five year old children with regard to usability and fun aspects.
To shed light on evaluation methods used with children in literature.
To develop a research-based mobile educational game, Hamza.
To describe the evaluation process of the Android game Hamza teaching preschoolers the Arabic Alphabet.
The field study took place at a nursery school in Egypt. Evaluation sessions were conducted with thirteen children. The children were observed as they played the game on two devices, an Android mobile phone and a 7.7-inch Android tablet, and notes were taken by the researcher using a printed form.
Surveying the Parents
Parents’ comments and suggestions were also taken to help improve the design of the learning game for children, and also to give valuable feedback on its educational effectiveness for this age group. After releasing the first version of Hamza game on Google Play Store, 90 users were asked to fill an online survey which was then used to make updates to the game.
The game employed attractive audio-visual effects and was developed using the Pre-MEGa framework.
Overall, the children enjoyed the game, and most of them asked to play it again. Most children needed no or only minimal help and could get along on their own. Girls were found to ask for help more than boys, especially when they had several options to choose from. The survey showed very positive ratings of different aspects of the game, as well as of how people described their children’s use and benefit of the game.
Limitations: The game is only available in Arabic language.
ABC3D – Using an Augmented Reality Mobile Game to Enhance Literacy in Early Childhood, Bhadra et al. (2016)
To present unique affordances for learning and to encourage young children to practice techniques to improve reading comprehension.
To develop a custom-designed augmented reality (AR) mobile game that harnesses the motivating power of interest and the affordances of augmented reality to engage children in practicing print-based literacy.
In the ABC3D game, a vocabulary list was first designed. Each entry of the vocabulary list contains a letter, a word, and a 3D model module used for the AR display. A sample entry is shown below:
Letter   Word      Module Idx
'A'      'Apple'   Module_Apple
'T'      'Tree'    Module_Tree
…        …         …
Table 2.1: Entry sample for ABC3D game
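The vocabulary list above amounts to a simple lookup table keyed by letter. The following Python sketch is illustrative only (the module identifiers follow Table 2.1; the lookup function name is hypothetical):

```python
# Sketch of the ABC3D vocabulary list as a lookup table keyed by letter.

VOCABULARY = {
    "A": {"word": "Apple", "module": "Module_Apple"},
    "T": {"word": "Tree",  "module": "Module_Tree"},
}

def lookup(letter):
    """Return the (word, 3D module) pair for a recognized letter,
    or None if the letter is not in the vocabulary."""
    entry = VOCABULARY.get(letter.upper())
    if entry is None:
        return None
    return (entry["word"], entry["module"])

print(lookup("t"))
```

Each recognized letter thus resolves to the word displayed next to the model and the module identifier used for the AR rendering.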
The game was designed in two modules: a letter recognition and an object collection game module.
Letter Recognition Module:
The purpose of the letter recognition module was to drive the user to read and write letters by giving a visual representation of an item associated with that letter. The user will first write down a single, capital letter, after which they will scan the letter through the mobile device’s camera. The system will then display the letter’s corresponding model in AR space. Next to the model, a text spelling of the object will appear. By providing the user with a visual context, the user’s interest will be stimulated, motivating them to pursue more independent action. To further tap into this interest, an accompanying game module was developed to reinforce the context from the first module.
Object Collection Game Module
In this module, the objects generated in the first module were stored in a list. For example, if the user scanned the letter “T” and the system displayed a “tree” model, the game would store the tree object in the list. The objects from the list will populate the game world, along with several other randomly selected objects. The player will be tasked with collecting the objects they scanned in the first module. Adding a game element that interacts with objects from the first module will enhance engagement and reinforce interest in exploring more vocabulary.
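The world set-up described above can be sketched as follows. This is an illustrative Python sketch: the object names, the number of distractors and the win condition are assumptions, not ABC3D's actual values.

```python
import random

# Sketch of the object-collection module's world set-up: objects the
# child scanned in the letter-recognition module are mixed with random
# distractor objects, and the round is won once every scanned-in target
# has been collected. All names here are illustrative.

ALL_OBJECTS = ["tree", "apple", "ball", "cat", "dog", "egg"]

def populate_world(scanned, n_distractors=2, rng=random):
    """Return the scanned objects plus randomly chosen distractors."""
    candidates = [o for o in ALL_OBJECTS if o not in scanned]
    return list(scanned) + rng.sample(candidates, n_distractors)

def collect(targets, picked):
    """The player wins once every target object has been picked up."""
    return all(t in picked for t in targets)

world = populate_world(["tree"], n_distractors=2)
print(len(world))                    # scanned object + 2 distractors
print(collect(["tree"], {"tree"}))   # targets all collected?
```

Keeping the distractors random across rounds means the child must recognize the word-object pairs rather than memorize screen positions.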
UI and Game Mechanics:
At the start of the application, the player will be placed at a start screen with two options, “Learn” and “Exit.” Upon pressing the exit button, the player will be taken back to the device’s main menu. Upon pressing the learn button, the player will progress to the scan menu. There, they will be prompted to scan a capital letter, ideally one written by the player.
Recognition with AR
At the scan screen, the system will scan in the capital letter, and generate a list of objects that start with the scanned letter. The objects will be rendered in AR space along with a text name. From there the player will be presented with a “play” button at which point they will be transferred to the object collection game proper.
Fig. 2.3: Sift feature visualization
During the scanning process, the system will first extract visual features from images.
Several letter templates were compiled in the system and the respective features of each template were extracted; the visualization of the features of a template is shown in Fig. 2.3 above. When a letter is scanned, its features are pairwise matched against the features of the templates, and the best match is taken as the recognized letter.
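The template-matching step above can be sketched as follows. This is a deliberately simplified Python illustration: real SIFT features are high-dimensional descriptors matched by distance, whereas here each "feature" is reduced to a hashable token purely to show the scoring-and-best-match logic.

```python
# Sketch of template matching for letter recognition: the scanned
# letter's features are matched against each stored template, and the
# template with the most matched features wins. Feature tokens are
# illustrative stand-ins for real SIFT descriptors.

TEMPLATES = {
    "A": {"f1", "f2", "f3", "f4"},
    "T": {"f5", "f6", "f7"},
}

def recognize(scanned_features):
    best_letter, best_score = None, 0
    for letter, template in TEMPLATES.items():
        score = len(scanned_features & template)   # matched feature pairs
        if score > best_score:
            best_letter, best_score = letter, score
    return best_letter                             # None if no features match

print(recognize({"f5", "f6", "f2"}))
```

A production recognizer would also apply a minimum-score threshold so that a poor best match is rejected rather than misread as a letter.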
Unity 3D was used to create the Augmented Reality software. For image processing, Vuforia 5 SDK was used. The device camera will scan the letter and parse it through the Vuforia text recognition library. The game will then compare the information with a cloud database and retrieve the appropriate item name (such as “tree” for T).
Fig.2.4: A running example of the AR software
After the letter is identified, the next step overlays it with the AR element. The Unity 3D game design engine can easily integrate AR components into a mobile interface. As seen in Fig. 2.4 above, the application successfully detects the location of the written letter “T”. The AR module also displays the model statically; the image positioning and orientation stayed constant despite testing several different viewing angles and lighting conditions.
ABC3D made use of static (unanimated) 3D models.
There was no audio to accompany the 3D models.
The Development of Educational Augmented Reality Application: A Practical Approach, Pantelić et al. (2017)
To present a practical approach that underpins the development of AR applications aimed at the secondary or primary level students.
To explore advantages and disadvantages of using virtual and augmented reality applications in educational settings.
To analyze pedagogical approaches that underpin the development of AR applications used in education.
The Analysis Phase
The Analysis phase started with the selection of the AR tool that would enable the educator to create augmented reality educational content. The choice of tool depended on two factors: the current hardware equipment and the criteria for selecting an AR development tool. At the educator’s disposal were an Apple iPad mini tablet and a laptop with several media-editing tools. The selection was based on criteria such as free download, stability of the tool, display quality and usability without additional equipment. First, twelve AR content-development tools were evaluated to select the most appropriate one. Of the twelve, Aurasma Studio met all the required criteria, followed by Augment and Tellagami. The next step was to get acquainted with the functionalities of the Aurasma tool from the educator’s perspective and then to identify the prerequisites that must be fulfilled for a student to use the created AR content. To view the created content, the user needs a mobile device with the Aurasma application installed and an Internet connection; interaction with the content within Aurasma requires no special knowledge or effort, since it is based on a finger touch. Finally, the subject of learning was determined: to display and explain computer components such as a hard disc, a motherboard, RAM memory and a processor. The target audience was pupils of the higher grades of elementary schools, or high school students, who should learn about and differentiate various computer components, explain each component’s parts and function, and name significant manufacturers of a particular component.
The idea was developed on Aurasma’s working principle that is marker-based. This means that a marker is needed, which could be an image, printed or displayed on the screen. When the marker image is scanned using Aurasma, the corresponding content is displayed and further interactions with the content could occur.
A printed image of the laptop casing components was used. By scanning that image with Aurasma, the application would display the name of a particular component and a picture of the component in a thumbnail. By pressing/tapping a thumbnail of a particular component, a larger image should be displayed and three options could be selected: a large component image, a 360º video of the component or basic information about the component displayed with animated text accompanied by the narrative.
To create the AR content, the usage of various editing tools was planned: PowerPoint and its plugin Office Mix, Adobe Photoshop, Fyuse and Aurasma Studio. PowerPoint enabled integration of textual information with pictures, sounds and video on the slides, and the slides could be exported in a video format as well. It was used to introduce basic information about each computer component, such as the component’s task, basic parts, capacity, best-known manufacturers, etc. Information was delivered in animated textual form accompanied by voice recordings acquired with Office Mix, then exported in a video format that could be integrated into Aurasma Studio. In addition, PowerPoint was used to create thumbnails and image frames, while Photoshop was used for image editing and the creation of transparent image backgrounds. Fyuse is a mobile application that enables recording of a 360º video, so an interactive video of each computer component was made. Aurasma Studio is an online tool that enables integration of multimedia elements into AR content, so it was used to link a marker image with the created multimedia, which overlay the image and also enable interactions throughout the content.
The Development Phase
Development started with the marker image. For this purpose, an HP ProBook 4730s laptop was disassembled and a photo of its components was taken. The marker image is shown in Fig. 2.5 below.
Fig. 2.5: A marker image of a system unit
When the marker image is scanned with the Aurasma application installed on a mobile device, the names and thumbnails of the computer components should appear. Since content was created for four computer components, the marker image should be covered by four so-called "overlays", one for each component, so that each can be selected to display additional information about that component. These four overlays and the marker image are shown in Fig. 2.6. The marker image in Fig. 2.6 serves only as a landmark that determines how the overlays are positioned relative to the marker image when it is scanned.
Fig. 2.6: First level overlays
Each overlay in Fig. 2.6 has three overlays of its own: one that displays only a component image, another that allows the selection of additional information about the component, and a third that links to a 360° video of the component. If the additional-information overlay is selected, a video overlay with basic information about the selected component is displayed. As already mentioned, this video was created from PowerPoint slides with text animations and recorded narration. The 360° video of each computer component was recorded with the Fyuse application installed on a tablet, by moving the camera around the object being recorded.
The solution used video and text to teach students.
SYSTEM ANALYSIS AND DESIGN
This chapter describes the design, input, output and processing requirements of the proposed system, as well as its hierarchical design. It also defines the different components that make up the system, how they are connected, what the system does and how it operates.
A system architecture (or systems architecture) is the conceptual model that defines the structure, behavior and views of a system. An architecture description is a formal description and representation of a system, organized in a way that supports reasoning about the structure of the system: its components, the externally visible properties of those components, and the relationships between them.
Figure 3.1: Architecture of the system (animated 3D objects are displayed once the Vuforia SDK target service detects a target image)
USE CASE OF THE SYSTEM
Figure 3.2: System Use Case Diagram (use cases: Scan Image Target; View Overlaid Information)
The C# programming language was used for the development of the system. C# is a general-purpose, object-oriented programming language that is widely used for creating games with the Unity game engine.
SOFTWARE AND IDE
Maya is a 3D computer graphics program created by Autodesk, used to model, animate, and render 3D scenes. 3D scenes created with Maya have appeared in movies, television, advertisements, games, product visualizations, and on the Web. Maya can be used to create and animate 3D scenes and render them as still images or as animation. Autodesk Maya can also be used to create virtual environments for use in virtual reality and augmented reality projects. For this project, Maya was used to model, rig and animate the 3D models, which were then exported into the Unity 3D engine.
Unity 3D is a game development platform used to build high-quality 3D games for desktop computers, mobile devices (Android, iPhone) and consoles. Unity provides an excellent entry point into game development and can be used to create both 2D games and games that require 3D features. Unity also supports the creation of augmented reality applications through the Vuforia SDK.
Vuforia is an Augmented Reality Software Development Kit (SDK) for mobile and desktop devices that enables the creation of Augmented Reality applications. It uses computer vision technology to recognize and track planar images and 3D objects. Vuforia provides Application Programming Interfaces (APIs) in .NET languages through an extension to the Unity game engine, thereby supporting development for iOS and Android devices.
Corel Draw is a graphics package widely used for graphic design. It is an excellent tool for creating content such as logos and banners, for photo editing and for interface design. Corel Draw was used to design the graphics background and icons for the system.
Android Studio is an Integrated Development Environment (IDE) for the Android platform, developed by Google. It is used to build applications that run on the Android operating system.
3D modeling (or three-dimensional modeling) is the process of developing a graphical representation of any three-dimensional surface of an object (either inanimate or living) via specialized software. The product is called a 3D model and can be exported in different formats for use in other 3D software. The process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting. For this project, Autodesk Maya was used to model the 3D objects, characters and environments.
SCANNING AND OBJECT RECOGNITION
The scanning and object recognition process requires the Vuforia SDK, an image target database downloaded from Vuforia's website, a license key and a camera for AR capturing. The image target is the image to be scanned and recognized. If the target matches one stored in the Vuforia database, a message is sent to the Vuforia Target Recognition System (VTRS) and the associated metadata (3D models, in this case) is retrieved and displayed in the device's augmented view.
C# scripts were added to perform tasks such as rotating the 3D models, scaling them and taking a snapshot of the animals in the augmented scene (which can be saved to the user's device photo gallery). The scripts were written in MonoDevelop.
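As an illustration of such scripts, a minimal sketch is given below. It is not the project's actual code: the class and field names are hypothetical, rotation and scaling use Unity's standard Transform API, and the snapshot uses Unity's ScreenCapture.CaptureScreenshot, which writes to the application's data folder; copying the file into the device's photo gallery requires additional platform-specific code not shown here.

```csharp
using UnityEngine;

// Illustrative sketch only: attached to a 3D model in the augmented scene.
public class ModelControls : MonoBehaviour
{
    public float rotateSpeed = 60f;  // degrees per second
    public float scaleStep = 0.1f;   // 10% per button press

    void Update()
    {
        // Continuously rotate the model about its vertical axis.
        transform.Rotate(0f, rotateSpeed * Time.deltaTime, 0f);
    }

    // Hooked to UI buttons to scale the model up or down.
    public void ScaleUp()   { transform.localScale *= (1f + scaleStep); }
    public void ScaleDown() { transform.localScale *= (1f - scaleStep); }

    // Hooked to a UI button to save a snapshot of the augmented scene.
    // Saving into the photo gallery itself would need a native plugin.
    public void TakeSnapshot()
    {
        ScreenCapture.CaptureScreenshot("snapshot.png");
    }
}
```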
The following hardware and software are required for smooth development of the system:
2GB of Random Access Memory
A minimum of 50GB of hard drive space
MonoDevelop or Visual Studio
The user interface was designed using Unity's UI system, Corel Draw and MonoDevelop. Because Unity's UI system has limited functionality, it was complemented with Corel Draw, which allowed the interface to be styled as desired. MonoDevelop was also employed in building the UI, as scripts were attached to the texts and buttons to perform their required operations.
THE APPLICATION MENU
The application menu comes up when the application is launched. The interface presents the user with five options: the letters, library, help, settings and exit menu options.
When tapped, the letters option opens the 'AR page'; the library option leads to the 'Library page'; the help option to the 'Help page'; the settings option to the 'Settings page'; and the exit option closes the program. Each page is discussed further below.
Below is the diagram of the main menu:
Figure 4.1: The main interface of the system
The AR page comes up when the Scan button is tapped. Its interface shows the device's camera view along with some menu buttons on the screen. Using the camera, a user can place an image target under the camera to be scanned and detected by the application. Once the target marker is detected, the 3D model that matches it pops up on the screen, augmenting the virtual scene with the physical environment as viewed through the phone camera. Sound matching the image target also begins to play while the camera tracks the target. Once the target is removed, the virtual scene immediately disappears and the audio stops playing.
This process creates fun and excitement for children, stimulating their interest while adding to their knowledge of the alphabet.
The AR page is shown in Fig. 4.2 below:
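The target-found/target-lost behaviour of the AR page can be sketched as a Vuforia trackable event handler. The sketch below is illustrative only, assuming the Vuforia 6/7-era Unity API (ITrackableEventHandler); the letterModel and letterAudio fields are hypothetical names, not taken from the actual project code.

```csharp
using UnityEngine;
using Vuforia;

// Illustrative sketch: attached to an ImageTarget for one letter.
public class LetterTargetHandler : MonoBehaviour, ITrackableEventHandler
{
    public GameObject letterModel;   // hypothetical: the letter's 3D model
    public AudioSource letterAudio;  // hypothetical: the matching sound

    private TrackableBehaviour mTrackableBehaviour;

    void Start()
    {
        mTrackableBehaviour = GetComponent<TrackableBehaviour>();
        if (mTrackableBehaviour != null)
            mTrackableBehaviour.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(
        TrackableBehaviour.Status previousStatus,
        TrackableBehaviour.Status newStatus)
    {
        bool found = newStatus == TrackableBehaviour.Status.DETECTED ||
                     newStatus == TrackableBehaviour.Status.TRACKED ||
                     newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        if (found)
        {
            // Target detected: show the model and start the matching sound.
            letterModel.SetActive(true);
            if (!letterAudio.isPlaying) letterAudio.Play();
        }
        else
        {
            // Target removed: hide the model and stop the sound.
            letterModel.SetActive(false);
            letterAudio.Stop();
        }
    }
}
```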
The library page consists of 26 menu items, one for each of the 26 letters of the alphabet. When any button in the menu is clicked, another page pops up, shows a 3D scene with animated models and plays the sound matching the clicked button.
This way, users can learn the alphabet by clicking any letter of their choice without having to use an image target.
The Library page is shown in Fig. 4.3 below:
Figure 4.3: The interface of the library page
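A library menu of this kind can be sketched with Unity's UI system as follows. This is an assumed illustration rather than the project's code: the button prefab field and the per-letter scene-naming convention ("Letter_A" to "Letter_Z") are hypothetical.

```csharp
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.SceneManagement;

// Illustrative sketch: populates the library grid with one button per letter.
public class LibraryMenu : MonoBehaviour
{
    public Button letterButtonPrefab;  // hypothetical prefab with a Text child
    public Transform gridParent;       // layout group holding the 26 buttons

    void Start()
    {
        for (char c = 'A'; c <= 'Z'; c++)
        {
            char letter = c;  // copy for the closure below
            Button b = Instantiate(letterButtonPrefab, gridParent);
            b.GetComponentInChildren<Text>().text = letter.ToString();
            // Each button loads the scene that shows the animated model
            // and plays the sound for its letter (assumed scene names).
            b.onClick.AddListener(
                () => SceneManager.LoadScene("Letter_" + letter));
        }
    }
}
```

An alternative design would use a single scene and swap the model and audio clip per letter, which avoids 26 separate scenes at the cost of a lookup table.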
The help page contains a short tutorial on how new users can use the application.
The interface for the help page is shown in Fig. 4.4.
Fig. 4.4: Snapshot of the help page.
The settings page contains basic settings for the application, like muting of background music.
CHAPTER FIVE CONCLUSION AND RECOMMENDATION