International Journal of Engineering Trends and Technology

Research Article | Open Access

Volume 8 | Number 3 | Year 2014 | Article Id. IJETT-V8P294 | DOI : https://doi.org/10.14445/22315381/IJETT-V8P294

Multiple Lecture Video Annotation and Conducting Quiz Using Random Tree Classification


V. Anusha, J. Shereen

Citation:

V. Anusha, J. Shereen, "Multiple Lecture Video Annotation and Conducting Quiz Using Random Tree Classification," International Journal of Engineering Trends and Technology (IJETT), vol. 8, no. 3, pp. 522-525, 2014. Crossref, https://doi.org/10.14445/22315381/IJETT-V8P294

Abstract

In recent years, educational institutes have shown growing interest in E-Learning and internet-based educational services. The growth of video lecture annotation tools also plays an essential role in this educational environment. The original CLAS (Collaborative Lecture Annotation System) was limited to a single study-test episode. To address this limitation, we propose the MLVA (Multiple Lecture Video Annotation) tool, developed to support the extraction of important information from multiple lectures. The primary concept of MLVA is straightforward: while watching a video-captured lecture, each student marks key points in the lecture with a simple button press. Each button press represents a point-based semantic annotation and indicates, "for this user, something important happened at this point in the lecture." The system relies on semantically constrained annotation, post-annotation data amalgamation, and transparent display of the amalgamated data. As a future enhancement, we focus on conducting quizzes for students and instructors.
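The amalgamation step described above can be sketched as follows: each student's button presses are timestamps into the lecture video, and pooling them across students surfaces segments that many users flagged as important. This is a minimal illustrative sketch, not the paper's implementation; the function names, the 30-second bin width, and the two-vote threshold are all assumptions for illustration.

```python
from collections import Counter

def aggregate_annotations(presses_by_student, bin_seconds=30):
    """Bucket each student's button-press timestamps (seconds into the
    lecture) into fixed-width bins and count presses per bin."""
    counts = Counter()
    for timestamps in presses_by_student.values():
        for t in timestamps:
            counts[int(t // bin_seconds)] += 1
    return counts

def key_points(counts, bin_seconds=30, min_votes=2):
    """Return start times (in seconds) of bins where at least
    `min_votes` presses landed, i.e. likely key points."""
    return sorted(b * bin_seconds for b, c in counts.items() if c >= min_votes)

# Example: three students annotate a lecture; two of them press
# the button around the 90-110 s mark, so that segment is flagged.
presses = {"s1": [95, 300], "s2": [102, 610], "s3": [15]}
counts = aggregate_annotations(presses)
print(key_points(counts))  # -> [90]
```

Transparent display of the amalgamated data could then render these flagged segments as a heatmap or markers on the video timeline.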

Keywords

CLAS, MLVA

