Journal of Applied Information Science
Publisher: Publishing India Group
Editor-in-chief: Mansaf Alam
Usage Of E-Books By Teaching Professionals: An Empirical Study
Authors: Bharti Motwani, Sharda Haryani, Sukhjeet Matharu
Volume: 1 | Issue no: 2-2013 | Pagination: 22-34
Today, a growing number of electronic resources are available for retrieving information, but locating relevant information in a timely manner is critical for the teaching profession. Electronic books (e-books) are one way to enhance the digital library, offering global, around-the-clock access to easy, quick, and effective information. During the last decade, libraries and publishers have successfully moved to providing online journals and databases, but the perspective of teaching professionals and academicians on e-books remains largely unexplored. To be effective, electronic textbooks and reading devices must improve teaching professionals' learning experience. This study investigates the extent of usage and acceptability of e-books from the teaching professionals' perspective, and also examines whether perceptions of e-book usage differ among teaching professionals across the various disciplines of management courses. The study is based on primary data collected from 150 respondents in Indore city. Its results will be beneficial to authors and publishers uploading their e-books.
Implementation Of Location Based Authentication For A Remote Client
Authors: Laxmi Arun, Mahima M. S., Rashmi R., Roshini Prasad G., Sachin Jain S.
Volume: 1 | Issue no: 2-2013 | Pagination: 17-21
Mobile networks provide a distinct set of services for the user, among which authentication is one of the most imperative and significant. There are various factors of authentication, such as passwords, security tokens, retinal scans, fingerprints, and other biometric data. Another factor is the user's location, which can also serve as a criterion for remote-client authentication. However, location information is private and can be misused; hence, distinct procedures should be implemented to ensure its integrity. In this paper, we propose the use of location-based services to authenticate a remote client. As a use case, we consider the Automated Teller Machine (ATM), one of the most popular targets of fraud today. The remote client is provided with the necessary authentication using location-based services, as defined by the Expand LRAP (Location based Remote client Authentication Protocol), so as to make the user aware of a genuine ATM where secure transactions can be carried out.
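The abstract does not give LRAP's message flow, but its core check, comparing the client's reported position against the registered position of a genuine ATM, can be sketched as follows. The haversine distance, the 50 m threshold, and all function names here are illustrative assumptions, not taken from the paper:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def authenticate(client_pos, registered_atm_pos, max_distance_m=50.0):
    """Accept the session only if the client is physically near a registered ATM."""
    return haversine_m(*client_pos, *registered_atm_pos) <= max_distance_m
```

In a real deployment the client's position would come from a trusted location service rather than the client itself, since self-reported coordinates can be spoofed.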
Mining Fuzzy Amino Acid Associations In Peptide Sequences Of Herpes Simplex Virus
Volume: 1 | Issue no: 2-2013 | Pagination: 11-16
Herpes is a usually mild, recurrent skin condition in which most infections go unrecognized and undiagnosed, and the mechanism of the disease is still not well understood. Analysis of the peptide sequences of herpes can reveal information useful for understanding this mechanism. In this paper, an attempt has been made to develop a model for mining fuzzy amino acid associations in peptide sequences of the herpes virus. The uncertainty arising from variation in sequence length is handled by employing fuzzy sets. A total of 9,160 sequences were taken from the National Centre for Biotechnology Information, from which around 4,004 non-redundant peptide sequences of the herpes virus were filtered to form the dataset. This dataset was transformed into a fuzzy transaction dataset, and fuzzy support and confidence were computed. The patterns generated by this model can be useful in understanding the structure, function, and interactions of the protein in the disease.
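The fuzzy support and confidence mentioned above are commonly computed as a sigma-count over fuzzy transactions: the support of an itemset is the mean, over transactions, of the minimum membership of its items. A minimal sketch under that standard definition (the membership values and amino-acid labels in the usage below are illustrative, not the paper's data):

```python
def fuzzy_support(transactions, itemset):
    """Fuzzy support: mean over transactions of the minimum membership of the items.

    Each transaction maps an item (e.g. an amino acid) to a membership degree
    in [0, 1]; an item absent from a transaction has membership 0.
    """
    total = sum(min(t.get(item, 0.0) for item in itemset) for t in transactions)
    return total / len(transactions)

def fuzzy_confidence(transactions, antecedent, consequent):
    """Fuzzy confidence of the rule antecedent -> consequent."""
    return (fuzzy_support(transactions, antecedent | consequent)
            / fuzzy_support(transactions, antecedent))
```

For example, with transactions `[{"A": 0.8, "L": 0.6}, {"A": 0.4}]`, the fuzzy support of `{"A"}` is (0.8 + 0.4) / 2 = 0.6.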
Procoop, A Program That Predicts The Cooperatives Of Hydrogen Exchange In Proteins
Volume: 1 | Issue no: 2-2013 | Pagination: 7-10
Each protein adopts a particular, well-defined, unique three-dimensional (3D) structure, which directs it to perform certain functions. The 3D structures of proteins can be determined at atomic resolution using X-ray crystallography and nuclear magnetic resonance (NMR) techniques, and these high-resolution structures are essential for understanding structure-function relationships. However, there is a divergent correlation between these experimental outcomes and the requirements of current research in structural biology. Against this background, the ProCOOP algorithm was developed to predict the population of secondary structures in proteins based on the molecular mass of their deuterated forms. By taking many different structural and environmental factors into consideration, ProCOOP validates its outputs and suggests improvements to experimental conditions for better predictions. The applications of ProCOOP to data analysis in proteomics and genomics are also discussed in detail.
Pre-Eminent Performance In A Multi-Cache Memory With All Level Replacement: An Analytical Study
Authors: Rasmi Prakash Swain, Debabala Swain, Bijay Paikaray
Volume: 1 | Issue no: 2-2013 | Pagination: 1-5
Cache memory is a small, high-speed memory that bridges the speed gap between main memory and the processor. The CPU uses several levels of cache to find the data it needs: if the data is not found in the level-1 cache (L1), it accesses L2. Different page replacement algorithms are used to manage this data, and an algorithm that works efficiently on L1 may not be as efficient on L2, so different access patterns must be accommodated. Cache memory works on the principle of locality. Whenever a page/word/block is requested by the CPU, it is first searched for in L1; if the required page is found in L1, it is a hit, otherwise a miss. When L1 is saturated and a miss occurs, a block must be evicted from L1 to create space for the required page. Different page replacement algorithms such as LRU, LFU, and FIFO are applied to various pairs of the cache hierarchy for result analysis. This paper motivates new researchers to develop a novel replacement algorithm that can be tested and shown to perform better than existing algorithms.
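As an illustration of one of the policies compared above, LRU replacement for a single cache level can be sketched with an ordered map; the capacity and keys in the usage below are arbitrary:

```python
from collections import OrderedDict

class LRUCache:
    """Single-level cache with least-recently-used eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # insertion order tracks recency
        self.hits = 0
        self.misses = 0

    def access(self, key, value=None):
        if key in self.store:               # hit: refresh recency
            self.hits += 1
            self.store.move_to_end(key)
            return self.store[key]
        self.misses += 1                    # miss: fetch and maybe evict
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict the least recently used block
        self.store[key] = value
        return value
```

LFU would instead evict the entry with the smallest access count, and FIFO the oldest entry regardless of recency; only the eviction rule changes.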
An Empirical Study On Extension Of Three Phase Commit Protocol For Concurrency Control In Distributed System
Volume: 1 | Issue no: 1-2013 | Pagination: 38-40
In distributed systems, the two-phase and three-phase commit protocols are used for concurrency control, but they suffer from the disadvantages of blocking and global abortion (of the transaction), respectively. In this paper, I design an algorithm that allows a transaction to be committed successfully even if some site (primary or secondary) has voted to abort. For this, I use a table called the TIT (Transaction Information Table), which has three fields, namely 1) Transaction Id, 2) Site Id, and 3) Value, together with three messages for communication between sites. I also use a local clock that runs at regular intervals to check the values of database objects (described in detail later in the paper). If any site votes to abort a transaction, it saves the TIT value "Inconsistent" for that site and transaction, while the transaction commits successfully on the other sites; an inconsistency therefore arises. To remove it, the local clock running at each site periodically checks for database objects whose TIT value is "Inconsistent" and issues a transaction to remove the inconsistency. The recovery transaction is originated by the coordinator through message passing; after receiving the message, the inconsistent site updates its database by fetching the transaction-related information from either its nearest site or the coordinator.
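The abstract names the TIT's three fields but not its representation. One way it might look, together with the periodic local-clock check, is sketched below; the entry class, the repair callback, and the string values are illustrative assumptions, not the paper's design:

```python
from dataclasses import dataclass

@dataclass
class TITEntry:
    """One row of a Transaction Information Table as described above."""
    transaction_id: str
    site_id: str
    value: str  # e.g. "Consistent" or "Inconsistent"

def periodic_check(tit, repair):
    """Local-clock tick: trigger a recovery action for each inconsistent entry.

    `repair` stands in for the coordinator-originated recovery transaction,
    e.g. fetching committed state from the nearest site or the coordinator.
    """
    for entry in tit:
        if entry.value == "Inconsistent":
            repair(entry)
            entry.value = "Consistent"
```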
Relative Performance Of A Multi-Level Cache With Last-Level Cache Replacement: An Analytical Review
Authors: Debabala Swain, Bijay Paikaray
Volume: 1 | Issue no: 1-2013 | Pagination: 33-36
Current-day processors employ a multi-level cache hierarchy with one or two levels of private caches and a shared last-level cache (LLC). The replacement policy at the LLC is vital, as it reduces off-chip memory latency as well as contention for memory bandwidth. Cache replacement techniques for inclusive LLCs may not be efficient in a multilevel cache, since the LLC can be shared by numerous applications with varying access behavior running simultaneously: one application may dominate another by flooding the cache with requests and evicting the other application's useful data. This paper analyzes some of the existing replacement techniques for the LLC and assesses their performance.
Probabilistic Segmentation Methods For Early Detection Of Uterine Cervical Cancer
Authors: Abhishek Das, Avijit Kar, Debasis Bhattacharyya
Volume: 1 | Issue no: 1-2013 | Pagination: 29-31
Uterine cervical cancer is one of the most prevalent forms of cancer in women worldwide. Most cases of cervical cancer can be prevented through screening programs aimed at detecting precancerous lesions. In this paper, novel methods are proposed for automated probabilistic image segmentation of cervical cancer. The detection of cervical lesions is an important issue in image processing because it has a direct impact on surgical planning. We examined segmentation accuracy using a validation metric against an estimated composite latent gold standard derived from several experts' manual segmentations. The distribution functions of the lesion and control pixel data were parametrically assumed to be a mixture of probability distributions with different shape parameters. We also estimated the corresponding receiver operating characteristic (ROC) curve over all possible decision thresholds. The automated segmentation yielded satisfactory accuracy, with varying optimal thresholds.
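An empirical ROC curve of the kind estimated above is built by sweeping a decision threshold over the lesion and control pixel scores and recording the true- and false-positive rates at each point. A minimal sketch (the scores and thresholds in the usage below are illustrative, not the paper's data):

```python
def roc_curve(lesion_scores, control_scores, thresholds):
    """Empirical ROC: one (FPR, TPR) point per decision threshold.

    A pixel is classified as lesion when its score exceeds the threshold;
    lesion_scores are the positives, control_scores the negatives.
    """
    points = []
    for t in thresholds:
        tpr = sum(s > t for s in lesion_scores) / len(lesion_scores)
        fpr = sum(s > t for s in control_scores) / len(control_scores)
        points.append((fpr, tpr))
    return points
```

For example, with lesion scores `[0.9, 0.8]` and control scores `[0.1, 0.2]`, a threshold of 0.5 separates the classes perfectly, giving the point (0.0, 1.0).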
Clustering And Classifying Diabetic Data Sets Using K-Means Algorithm
Authors: M. Kothainayaki, P. Thangaraj
Volume: 1 | Issue no: 1-2013 | Pagination: 24-27
The k-means algorithm is well known for its efficiency in clustering large data sets. However, since it works only on numeric values, it cannot be used directly to cluster real-world data containing categorical values. In this paper, we present the classification of a diabetes data set and apply the k-means algorithm to categorical domains. Before classifying the data set, preprocessing is performed to remove noise, and a missing-value algorithm is used to replace null values. This algorithm also improves the classification rate; the data set is then clustered using two attributes, namely the plasma and pregnancy attributes.
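Clustering on two numeric attributes, as above, reduces to k-means on 2-D points. A minimal pure-Python sketch, where each point stands for a hypothetical (plasma, pregnancy) pair rather than the paper's actual data:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on 2-D points, e.g. (plasma, pregnancy) pairs."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from random data points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest centroid
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                  + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        for i, cl in enumerate(clusters):  # recompute centroids as cluster means
            if cl:
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids, clusters
```

Handling categorical attributes, as the paper proposes, would additionally require replacing the squared-Euclidean distance and the mean with a categorical dissimilarity measure and a mode, respectively.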
Machine Learning Based Architecture For Rule Establishment Of Web Proxy Server
Authors: P.S. Banerjee, G. Sahoo, Umesh Prasad
Volume: 1 | Issue no: 1-2013 | Pagination: 15-22
In the present scenario, the Internet has become an integral part of everyone's life: many services such as mail, news, and chat are available, along with huge amounts of information on almost any subject. However, in most cases the bandwidth available for connecting to the Internet is limited; it needs to be used efficiently and, more importantly, productively. Generally, bandwidth is distributed among groups of users based on policy constraints, but users do not always use their entire allocated bandwidth at all times, and sometimes they need more than what is allocated to them. Ideally, productive usage should be preferred over unproductive usage when bandwidth is scarce; when it is abundant, any kind of use can be permitted provided it is in consonance with policy. Bandwidth usage patterns vary with the time of day, the time of year, and user requirements, so there is a need for dynamic allocation of bandwidth that satisfies users' requirements, manages variable usage, and is consistent with administrative usage policy. Internet usage is varied, and in the context of an institution or organization an administrator would like to maximize productive usage. There is therefore a need to implement access control policies that prevent unproductive use but, to the extent possible, do not impose censorship. The Squid proxy server is a full-featured web proxy that increases the efficiency of the Internet link by providing caching and proxy services. Squid provides many mechanisms for setting access control policies; however, deciding which policies to implement requires experimentation, and usage statistics must be processed to obtain useful data. The architecture elaborated in this paper is based on machine learning and determines policies depending on the content of the URLs currently being visited.
The main component of this architecture is the Squid traffic analyzer, which classifies the traffic and generates URL lists; these lists are used in formulating access policies. The concept of delay priority is also introduced, which gives system administrators more options in setting policies for bandwidth management. Since Squid allows HTTP tunneling, it forms a loophole for strict policy management; this paper therefore also considers proxy tunneling in Squid and suggests some possible solutions to this problem.
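URL lists like those produced by the analyzer plug directly into Squid's standard `acl`, `http_access`, and `delay_pools` directives. A hypothetical fragment of `squid.conf` (the file paths, list names, and 64 KB/s rate are illustrative, not from the paper):

```
# ACLs built from the analyzer's generated URL lists
acl productive dstdomain "/etc/squid/productive_domains.txt"
acl unproductive dstdomain "/etc/squid/unproductive_domains.txt"

http_access allow productive
http_access allow unproductive    # permitted, but throttled below

# One class-1 delay pool: a single aggregate token bucket
delay_pools 1
delay_class 1 1
delay_parameters 1 64000/64000    # restore rate/max: ~64 KB/s aggregate
delay_access 1 allow unproductive
delay_access 1 deny all
```

This captures the delay-priority idea: unproductive traffic stays reachable, avoiding outright censorship, but competes for a small shared bandwidth bucket while productive traffic is unthrottled.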