Abstract Computer Science

Table of contents

  1. Abstract
  2. I. Objective and Scope
  3. II. Problem Statement
  4. III. Introduction
  5. IV. Literature Survey
  6. V. Proposed System Approach
  7. VI. Mathematical Model
  8. VII. Techniques Used

Abstract

Computer science and many of its applications are about developing, analyzing, and applying algorithms. Efficient solutions to important problems in various disciplines other than computer science usually involve transforming the problems into algorithmic ones on which standard algorithms are applied. The number of scholarly digital documents is increasing day by day. Automatically finding and extracting the algorithms in this vast collection of documents would enable algorithm indexing, searching, discovery, and analysis. AlgorithmSeer, a search engine for algorithms, has been investigated as part of CiteSeerX with the intent of providing a large algorithm database.

A novel set of scalable techniques used by AlgorithmSeer to identify and extract algorithm representations in a heterogeneous pool of scholarly documents is proposed. Along with this, users with different levels of knowledge can access the platform and highlight portions of textual content that are particularly important and relevant. The highlighted documents can be shared with others in support of lectures and self-learning. However, the highlighted parts of a text may not be useful to learners at all levels. This paper also addresses the problem of predicting new highlights for partly highlighted e-learning documents.

I. Objective and Scope

  • To identify algorithm representations in a document.
  • To extract algorithm representations from a document.
  • To facilitate algorithm indexing and searching.
  • To increase the productivity of scientists.
  • To consider the proficiency level of the highlighting users to drive the generation of new highlights.
  • The system is useful across computer science.
  • It is helpful to algorithm searchers.
  • It is useful in building a web-based digital library of scientific literature.

II. Problem Statement

Manually searching for newly published algorithms is a nontrivial task. Researchers and others who aim to discover efficient and innovative algorithms would have to actively search and monitor relevant new publications in their fields of study to keep abreast of the latest algorithmic developments. The problem is worse for algorithm searchers who are inexperienced in document search. Ideally, we would like to have a system that automatically discovers and extracts algorithms from scholarly digital documents. Such a system could facilitate algorithm indexing, searching, a wide range of potential knowledge discovery applications, and studies of algorithm evolution, and presumably increase the productivity of scientists.

III. Introduction

Computer science is about developing, analyzing, and applying algorithms. Efficient solutions to important problems in disciplines other than computer science usually involve transforming the problems into algorithmic ones on which standard algorithms are applied. Furthermore, a thorough knowledge of state-of-the-art algorithms is also crucial for developing efficient software systems. Standard algorithms are usually collected and catalogued manually in algorithm textbooks, encyclopaedias, and websites that provide references for computer programmers.

While most standard algorithms are already catalogued and made searchable, especially those in online catalogs, newly published algorithms appear only in new articles. The explosion of newly developed algorithms in scientific and technical documents makes it infeasible to catalog them manually. Manually searching for these newly published algorithms is a nontrivial task. Researchers and others who aim to discover efficient and innovative algorithms would have to actively search and monitor relevant new publications in their fields of study to keep abreast of the latest algorithmic developments. The problem is worse for algorithm searchers who are inexperienced in document search.

We would like to have a system that automatically discovers and extracts algorithms from scholarly digital documents. Such a system could facilitate algorithm indexing, searching, a wide range of potential knowledge discovery applications, and studies of algorithm evolution, and presumably increase the productivity of scientists. Since algorithms represented in documents do not conform to specific styles and are written in arbitrary formats, effective identification and extraction is a challenge. E-learning platforms are complex systems that aim to support e-learning activities with the help of electronic devices such as laptops, tablets, and smartphones. Generally, such e-learning activities revolve around textual documents.

Due to the ever-increasing number of electronic documents retrievable from heterogeneous sources, the manual inspection of these teaching materials may become practically infeasible. Hence, there is a need for automated analytics solutions to analyze electronic teaching content and to automatically infer potentially useful information.

Highlights are graphical signs that are usually used to mark parts of the textual content. The manual generation of text highlights is time-consuming, i.e., it cannot be applied to very large document collections without significant human effort, and it is prone to errors for learners who have limited knowledge of the document subject. Automating the process of text highlighting requires advanced analytical models able to (i) capture the underlying correlations between textual contents and (ii) scale to large document collections. In the proposed system, the proficiency level of the highlighting users is considered to drive the generation of new highlights.
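As a minimal sketch of this idea (not the paper's exact model), highlight prediction can be framed as binary sentence classification. The toy sentences, labels, and the choice of a Naive Bayes classifier over tf-idf features below are illustrative assumptions; a real system would train a separate model per user proficiency level.

```python
# Minimal sketch: highlight prediction as binary sentence classification.
# Data and model choices are placeholders, not the paper's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: sentences from partly highlighted documents.
sentences = [
    "The algorithm runs in O(n log n) time.",
    "This chapter was written in 2010.",
    "Dynamic programming stores intermediate results to avoid recomputation.",
    "The authors thank the reviewers for their comments.",
]
highlighted = [1, 0, 1, 0]  # 1 = highlighted by users at this proficiency level

# tf-idf features feeding a Naive Bayes classifier, one plausible choice among many.
model = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
model.fit(sentences, highlighted)

# Predict new highlights for the unhighlighted part of a document.
new_sentences = ["The greedy choice property guarantees an optimal solution."]
print(model.predict(new_sentences))
```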

IV. Literature Survey

S. Kataria et al. [1] consider two-dimensional (2-D) plots in digital documents on the web as an important source of information that is largely under-utilized. The paper describes how data and text can be extracted automatically from these 2-D plots, eliminating a time-consuming manual process. The information extraction algorithm presented in that paper identifies the axes of the figures, extracts text blocks such as axis labels and legends, and identifies data points in the figure. It also extracts the units appearing in the axis labels and segments the legends to identify the different lines in the legend, the different symbols, and their associated text explanations.

The proposed algorithm also performs the challenging task of separating out overlapping text and data points effectively. Document analysis of mathematical texts is a challenging problem even for born-digital documents in standard formats. J. B. Baker et al. [2] present alternative approaches addressing this problem in the context of PDF documents. One uses an OCR approach for character recognition together with a virtual link network for structural analysis. The other uses direct extraction of symbol information from the PDF file with a two-stage parser to extract layout and expression structures.

With reference to ground-truth data, they compare the effectiveness and accuracy of the two techniques quantitatively with respect to character identification and structural analysis of mathematical expressions, and qualitatively with respect to layout analysis. Algorithms are an integral part of the computer science literature. S. Bhatia et al. [3] describe a vertical search engine that identifies the algorithms present in documents and extracts and indexes the related metadata and textual descriptions of the identified algorithms. This algorithm-specific information is then utilized for algorithm ranking in response to user queries. D. M. Blei, A. Y. Ng et al. [4] describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model in which each item of a collection is modeled as a finite mixture over an underlying set of topics.

Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. The paper presents efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. J. Kittler et al. [5] describe a common theoretical framework for combining classifiers that use distinct pattern representations and show that many existing schemes can be considered special cases of compound classification where all the pattern representations are used jointly to make a decision. An experimental comparison of various classifier combination schemes demonstrates that the combination rule developed under the most restrictive assumptions, the sum rule, outperforms other classifier combination schemes.
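As a rough illustration of the LDA topic modeling described in [4], the following sketch fits a two-topic model on a toy corpus with scikit-learn; the corpus and parameter values are illustrative assumptions and are not taken from the cited work.

```python
# Minimal LDA sketch in the spirit of [4], using scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "sorting algorithm complexity analysis",
    "graph shortest path algorithm",
    "neural network training gradient descent",
    "topic model inference variational bayes",
]

# LDA operates on raw term counts rather than tf-idf weights.
X = CountVectorizer().fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # per-document topic proportions
print(doc_topics.round(2))
```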

A sensitivity analysis of the various schemes to estimation errors is carried out to show that this finding can be justified theoretically. Efficient algorithms are extremely important and can be crucial for certain software projects. S. Bhatia, S. Tuarob et al. [6] propose an algorithm search engine that keeps users abreast of the latest algorithmic developments. All the documents in the repository are first converted into text using a PDF-to-text converter. The extracted text is then analyzed to find algorithms, which are then indexed along with their associated metadata. The query processing engine accepts the query from the user through the query interface, searches the index for relevant algorithms, and presents a ranked list of algorithms to the user.
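As a rough sketch of how such an engine might index extracted algorithm metadata and answer queries, the fragment below builds a toy inverted index; the index layout and the overlap-based ranking are simplified assumptions, not the actual implementations described in [3] or [6].

```python
# Toy inverted index over extracted algorithm metadata with a simple
# term-overlap ranking; a simplified assumption, not the cited systems' design.
from collections import defaultdict

# Extracted metadata: algorithm id -> textual description / caption.
metadata = {
    "algo-1": "quick sort divide and conquer sorting",
    "algo-2": "dijkstra shortest path graph search",
    "algo-3": "merge sort stable sorting algorithm",
}

# Build a simple inverted index from terms to algorithm ids.
index = defaultdict(set)
for algo_id, text in metadata.items():
    for term in text.lower().split():
        index[term].add(algo_id)

def search(query):
    """Rank algorithms by the number of query terms they match."""
    scores = defaultdict(int)
    for term in query.lower().split():
        for algo_id in index.get(term, ()):
            scores[algo_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("sorting algorithm"))  # ['algo-3', 'algo-1']
```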

Latent Dirichlet analysis, or topic modeling, is a flexible latent variable framework for modeling high-dimensional sparse count data. Various learning algorithms have been developed in recent years, including collapsed Gibbs sampling, variational inference, and maximum a posteriori estimation, and this variety motivates the need for careful empirical comparisons. A. Asuncion et al. [7] highlight the close connections between these approaches. When the hyperparameters are optimized, the differences in performance among the algorithms diminish significantly. The ability of these algorithms to achieve solutions of comparable accuracy gives us the freedom to select computationally efficient approaches. P. Chiu et al. [8] present a method for picture detection in document page images, which can come from scanned or camera images, or be rendered from electronic file formats.

The described method uses OCR to separate out the text and applies the Normalized Cuts algorithm to cluster the non-text pixels into picture regions. A refinement step uses the captions found in the OCR text to deduce how many pictures are in a picture region, thereby correcting for under- and over-segmentation. S. Bhatia and P. Mitra [9] present the first set of methods to automatically extract useful information (a synopsis) related to document elements. Naive Bayes and support vector machine classifiers are used to identify relevant sentences from the document text based on the similarity and the proximity of the sentences to the caption and to the sentences in the document text that refer to the document element. G. W. Klau et al. [10] consider the fractional prize-collecting Steiner tree problem on trees. The paper presents three algorithms for solving the fractional prize-collecting Steiner tree (PCST) problem on trees G = (V, E). Newton's algorithm has a worst-case running time of O(|V|²).

The paper also presents a variant of parametric search and proves that the worst-case running time of this new algorithm is O(|V| log |V|). Computational results show that Newton's method performs best on randomly generated problems, while a simple binary search approach and the method proposed in the paper are considerably slower. For all three algorithms, the running time grows slightly faster than linearly with the size of the test instances.

V. Proposed System Approach

In the proposed system, documents are processed to find the algorithms present in them. The user submits a query to the system. Textual metadata contains relevant information about each detected algorithm. After processing a document, its textual metadata is extracted and then indexed. Query processing is done on the metadata, and the final results are returned to the user. Non-textual content occurring in the text is automatically filtered out before running the learning process. Two text processing steps are applied: (i) stemming and (ii) stopword elimination.

To analyze the occurrence of single terms in the sentence text, after stemming and stopword elimination the sentence text is transformed into a term frequency-inverse document frequency (tf-idf) matrix. If the training dataset contains no information about the level of knowledge of the users, a single classification model is generated and used to predict new highlights. Otherwise, the knowledge level of the highlighting users is considered, because it is deemed relevant for making accurate highlight predictions.
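A minimal sketch of this transformation with scikit-learn's TfidfVectorizer follows; the preprocessed sentences are placeholders for real document text.

```python
# Sketch of turning preprocessed sentence text into a tf-idf matrix.
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "stem sentence text after stopword elimination",
    "highlight predict from sentence term weight",
    "term frequency inverse document frequency weight",
]

vectorizer = TfidfVectorizer()
tfidf_matrix = vectorizer.fit_transform(sentences)  # sparse sentences x terms matrix

print(vectorizer.get_feature_names_out())
print(tfidf_matrix.toarray().round(2))
```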

A prototype of an algorithm search engine, AlgorithmSeer, is presented. Plain text is extracted from the PDF file. We use PDFBox to extract text and modify the package to also extract object information such as font and location information from a PDF document. Then, three sub-processes operate in parallel: document segmentation, PC (pseudo-code) detection, and AP (algorithmic procedure) detection. The document segmentation module identifies sections in the document. The PC detection module detects PCs in the parsed text file. The AP detector first cleans the extracted text and repairs broken sentences, then identifies APs. After PCs and APs are identified, the final step involves linking the algorithm representations that refer to the same algorithm. The final output is a set of unique algorithms.
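The following skeleton mirrors the control flow just described; every function name is a hypothetical placeholder rather than AlgorithmSeer's actual API, and the bodies are intentionally left unimplemented.

```python
# Skeletal view of the extraction pipeline described above. All functions are
# hypothetical placeholders; the paper uses a modified PDFBox for extraction,
# and only the control flow here mirrors the text.

def extract_text(pdf_path):
    """Stand-in for the PDFBox-based extractor returning plain text plus font/location info."""
    ...

def segment_document(text):
    """Identify the sections of the document."""
    ...

def detect_pseudo_codes(text):
    """Detect pseudo-codes (PCs) in the parsed text."""
    ...

def detect_algorithmic_procedures(text):
    """Clean the text, repair broken sentences, then identify algorithmic procedures (APs)."""
    ...

def link_representations(pcs, aps):
    """Link PCs and APs that refer to the same algorithm into unique algorithm records."""
    ...

def algorithmseer_pipeline(pdf_path):
    text = extract_text(pdf_path)
    sections = segment_document(text)   # runs alongside the two detectors in the full system
    pcs = detect_pseudo_codes(text)
    aps = detect_algorithmic_procedures(text)
    return link_representations(pcs, aps)  # final output: set of unique algorithms
```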

VI. Mathematical Model

S = {s, e, i, F, o}

S represents the proposed system.

s represents the start state of the system.

i represents the input of the system, i.e., PDF documents and highlighted documents.

o represents the output of the system, i.e., a set of unique algorithms along with the highlighted points and the corresponding user level.

e represents the end state of the system.

F = {f1, f2, f3, f4, f5, f6, f7, f8, f9} represents the functions of the system:

f1 = Document_Segmenter
f2 = Pseudo_code_Detector
f3 = Text_Cleaner
f4 = Sentence_Extractor
f5 = Algo_Procedure_Detector
f6 = Algo_Linker
f7 = Stemming
f8 = Stop_word_Removal
f9 = TF_IDF_Calculation

VII. Techniques Used

Stemming. This process reduces words to their base or root form. This step, which can be enabled or disabled according to the user's preferences, remaps the textual content to a reduced number of word roots. For example, nouns, verbs in gerund form, and past tenses are reduced to a common root form. This step is particularly useful for reducing the bias in classification when statistics-based text analyses are performed.

Stopword elimination. This process is used to filter out weakly informative words, i.e., the stopwords. Examples of stopwords are articles, prepositions, and conjunctions. In text analyses, these words are almost uninformative for predicting highlights and, thus, should be ignored.

The tf-idf matrix. To determine the importance of a word stem in the text, a term frequency-inverse document frequency (tf-idf) evaluator is used.
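For reference, a standard formulation of the tf-idf weight is given below; the paper does not spell out which exact variant it uses, so this is an assumed definition.

```latex
% A common tf-idf weighting; the specific variant used in the paper is not stated.
\[
\mathrm{tfidf}(t, d) \;=\; \mathrm{tf}(t, d) \cdot \log \frac{N}{\mathrm{df}(t)}
\]
% tf(t, d): frequency of term t in sentence/document d
% df(t):    number of documents containing term t
% N:        total number of documents in the collection
```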


AlgorithmSeer steps:

  1. Plain text is extracted from the PDF file.
  2. Then, three sub-processes operate in parallel: document segmentation, PC detection, and AP detection. The document segmentation module identifies sections in the document. The PC detection module detects PCs in the parsed text file. The AP detector first cleans the extracted text and repairs broken sentences, then identifies APs.
  3. After PCs and APs are identified, the final step involves linking the algorithm representations that refer to the same algorithm.
  4. The final output would then be a set of unique algorithms.