Data Structures and Algorithms

1405 Submissions

[25] viXra:1405.0352 [pdf] replaced on 2014-06-06 08:25:56

On Information Hiding

Authors: José Francisco García Juliá
Comments: 3 Pages.

Information hiding is not the hiding of programming. It is the hiding of changeable information inside programming modules.
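As a concrete, hedged illustration of the idea (not taken from the note itself), the following Python sketch hides a changeable storage decision inside a module behind a stable interface; the class and method names are invented for the example.

```python
# Hypothetical illustration of information hiding: the changeable detail
# (how records are stored) is hidden inside the module; callers see only
# a stable interface, so the storage decision can change without breaking them.

class PhoneDirectory:
    """Stable interface; the internal representation may change freely."""

    def __init__(self):
        # Changeable information: today a dict, tomorrow perhaps a B-tree
        # or a database connection. Clients never depend on this choice.
        self._entries = {}

    def add(self, name, number):
        self._entries[name] = number

    def lookup(self, name):
        return self._entries.get(name)


directory = PhoneDirectory()
directory.add("Ada", "555-0101")
print(directory.lookup("Ada"))  # 555-0101
```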
Category: Data Structures and Algorithms

[24] viXra:1405.0312 [pdf] replaced on 2014-09-21 05:47:47

Transport Catastrophe Analysis as an Alternative to a Monofractal Description: Theory and Application to Financial Time Series

Authors: Sergey A. Kamenshchikov
Comments: 12 Pages. Journal of Chaos, Volume 2014, Article ID 346743. Author: ru.linkedin.com/pub/sergey-kamenshchikov/60/8b1/21a/

The goal of this investigation was to overcome the limitations of the persistency analysis introduced by Benoit Mandelbrot for monofractal Brownian processes: nondifferentiability, the Brownian nature of the process, and a linear memory measure. We have extended the sense of the Hurst factor by considering a phase-diffusion power law. It was shown that pre-catastrophic stabilization, as an indicator of bifurcation, leads to a new minimum of the momentary phase diffusion, while bifurcation itself causes an increase of the momentary transport. The efficiency of the diffusive analysis has been experimentally compared with the application of the Reynolds stability model. An extended Reynolds parameter has been introduced as an indicator of phase transition. A combination of the diffusive and Reynolds analyses has been applied to describe the time series of Dow Jones Industrial weekly prices during the world financial crisis of 2007-2009. Both the diffusive and Reynolds parameters showed extreme values in October 2008, when the mortgage crisis was registered. The combined R/D description allowed short-memory and long-memory shifts of the market evolution to be distinguished. It was concluded that a systematic large-scale failure of the financial system began in October 2008 and started fading in February 2009.
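For readers who want a concrete starting point, the following is a minimal, generic Python sketch of estimating a diffusion power-law exponent from a time series by fitting the mean-squared displacement against the lag on log-log axes; it is not the authors' R/D procedure, and the lag range and test series are arbitrary.

```python
import numpy as np

def diffusion_exponent(x, max_lag=100):
    """Estimate alpha in <(x(t+tau) - x(t))^2> ~ tau**alpha.

    alpha ~ 1 corresponds to ordinary (Brownian) diffusion; alpha < 1 to
    sub-diffusion and alpha > 1 to super-diffusion / enhanced transport.
    """
    lags = np.arange(1, max_lag)
    msd = np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)   # log-log linear fit
    return slope

# Example on a synthetic random walk (expected exponent close to 1).
rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(5000))
print(round(diffusion_exponent(walk), 2))
```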
Category: Data Structures and Algorithms

[23] viXra:1405.0101 [pdf] submitted on 2014-05-07 03:58:11

Conversion of P-Type to N-Type Conductivity in ZnO Thin Films by Increasing Temperature

Authors: Trilok Kumar Pathak, Prabha Singh, L.P.Purohit
Comments: 10 Pages.

ZnO thin films with a thickness of about 15 nm were prepared on (0001) substrates by pulsed laser deposition. X-ray photoelectron spectroscopy indicated that both the as-grown and the annealed ZnO thin films were oxygen-rich. Hydrogen (H2) sensing measurements indicated that the conductivity type of both the unannealed and the annealed ZnO films converted from p-type to n-type as the operating temperature was increased; however, the two films showed different conversion temperatures. The origin of the p-type conductivity in the unannealed and annealed ZnO films should be attributed to oxygen-related defects and zinc-vacancy-related defects, respectively. The conversion of the conductivity type was due to the annealing out of the related defects. Moreover, p-type ZnO films can work at lower temperatures than n-type ZnO films without obvious sensitivity loss.
Category: Data Structures and Algorithms

[22] viXra:1405.0099 [pdf] submitted on 2014-05-07 04:02:40

Hierarchical Importance Indices Based Approach for Reliability Redundancy Optimization of Flow Networks

Authors: Kumar Pardeep
Comments: 15 Pages.

In flow networks, it is assumed that a reliability model representing telecommunication networks is independent of topological information but depends on traffic-path attributes such as delay, reliability and capacity. From a quality-of-service point of view, the performance of such networks is measured by the flow capacity that can satisfy the customers' demand. To design such flow networks, a hierarchical-importance-indices-based approach to reliability redundancy optimization has been proposed, using a composite performance measure that integrates reliability and capacity. The method uses cardinality and other hierarchical importance indices as criteria for selecting the flow paths and backup paths to be optimized. The algorithm is reasonably efficient, owing to its reduced computational work, even for large telecommunication networks.
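The selection idea can be illustrated with a small, hypothetical Python sketch: each candidate path gets a composite score built from its reliability (product over links) and the share of demand its capacity can carry, and paths are chosen greedily by that score. The score and the data layout are stand-ins for the paper's hierarchical importance indices, not the paper's exact measure.

```python
from math import prod

# Each candidate flow path is a list of links; every link has a reliability
# and a capacity. The composite measure below (path reliability weighted by
# the share of demand the path can carry) is a hypothetical stand-in used
# only to show the selection idea.

def path_reliability(path):
    return prod(link["reliability"] for link in path)

def path_capacity(path):
    return min(link["capacity"] for link in path)

def select_paths(paths, demand):
    scored = sorted(
        paths,
        key=lambda p: path_reliability(p) * min(path_capacity(p), demand) / demand,
        reverse=True,
    )
    chosen, covered = [], 0
    for path in scored:                      # greedy selection of primary paths
        if covered >= demand:
            break
        chosen.append(path)
        covered += path_capacity(path)
    return chosen

paths = [
    [{"reliability": 0.99, "capacity": 40}, {"reliability": 0.98, "capacity": 60}],
    [{"reliability": 0.95, "capacity": 100}],
]
print(len(select_paths(paths, demand=80)))
```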
Category: Data Structures and Algorithms

[21] viXra:1405.0057 [pdf] submitted on 2014-05-06 23:30:29

Design & Simulation of 128x Interpolator Filter

Authors: Rahul Sinha, A. Sonika
Comments: 10 Pages.

This paper presents the design considerations and simulation of an interpolator with an OSR of 128. The proposed structure uses half-band filters and a comb/sinc filter. Experimental results show that the proposed interpolator achieves the design specification and also has good noise-rejection capability. The interpolator accepts input at 44.1 kHz for applications such as CD and DVD audio. The interpolation filter can be applied to a delta-sigma DAC. The design and simulation were carried out with the MATLAB and Xilinx ISE simulators. The maximum operating frequency achieved is 34.584 MHz.
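A minimal software sketch of the same structure, assuming SciPy is available: a 128x interpolator built as seven cascaded 2x stages, each with a generic low-pass FIR standing in for the paper's half-band and comb/sinc stages (the actual coefficients and word lengths are not reproduced).

```python
import numpy as np
from scipy import signal

# Illustrative 128x interpolator built from seven 2x stages (2**7 = 128).
# Each stage upsamples by 2 and low-pass filters at half the new band;
# the paper's specific half-band and comb/sinc designs are not reproduced.

fs_in = 44_100
t = np.arange(1024) / fs_in
x = np.sin(2 * np.pi * 1_000 * t)             # 1 kHz test tone

y = x
for _ in range(7):                            # 7 stages of 2x -> 128x overall
    h = signal.firwin(31, 0.5)                # cutoff at half the (new) band
    y = signal.upfirdn(2 * h, y, up=2)        # upsample by 2, filter, gain-correct

print(len(y), "samples at", fs_in * 128, "Hz (approx.)")
```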
Category: Data Structures and Algorithms

[20] viXra:1405.0056 [pdf] submitted on 2014-05-06 23:35:59

A New Optimization of Noise Transfer Function of Sigma-delta-modulator with Supposition Loop Filter Stability

Authors: Saman Kaedi, Ebrahim Farshidi
Comments: 15 Pages.

In this paper, a discrete-time sigma-delta ADC with new assumptions in the optimization of the noise transfer function (NTF) is presented, which improves the SNR and accuracy of the ADC. The zeros and poles of the sigma-delta loop filter are optimized and placed by a genetic algorithm under a loop-filter stability assumption, so that the final quantization noise density of the modulator is significantly decreased. With the quantization noise density as the optimization target, and without any additional circuit or filter, the noise folded into the pass band by down-sampling is minimized, so the SNR increases further. The circuit is designed and simulated in MATLAB. The simulation results for the sigma-delta ADC demonstrate that this methodology yields a 7 dB (equivalent to more than 1 bit) improvement in SNR.
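To make the zero-placement step concrete, here is a minimal, hypothetical Python sketch: the angles of the NTF's unit-circle zero pairs are optimized to minimize the in-band noise gain, using SciPy's differential evolution in place of the paper's genetic algorithm. The paper's pole placement and full stability handling are omitted, and the order/OSR values are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Minimal sketch of NTF zero optimization for a noise-shaping modulator
# (order 4, OSR 64): two conjugate zero pairs on the unit circle are placed
# to minimize the in-band quantization noise gain.

OSR, PAIRS = 64, 2
band_edge = np.pi / OSR                        # upper edge of the signal band

def ntf_mag(zero_angles, w):
    """|NTF(e^jw)| for an FIR NTF with conjugate zero pairs at +/- zero_angles."""
    z_inv = np.exp(-1j * w)
    num = np.ones_like(w, dtype=complex)
    for a in zero_angles:
        num *= (1 - np.exp(1j * a) * z_inv) * (1 - np.exp(-1j * a) * z_inv)
    return np.abs(num)

def inband_noise(zero_angles):
    w = np.linspace(1e-4, band_edge, 200)      # evaluate only inside the band
    return np.mean(ntf_mag(zero_angles, w) ** 2)

result = differential_evolution(inband_noise, bounds=[(0.0, band_edge)] * PAIRS, seed=1)
print("optimized zero angles (rad):", result.x)
print("in-band noise vs. all zeros at DC:", inband_noise(result.x) / inband_noise([0.0, 0.0]))
```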
Category: Data Structures and Algorithms

[19] viXra:1405.0055 [pdf] submitted on 2014-05-06 23:37:04

Analysis of Relative Importance of Data Quality Dimensions for Distributed Systems

Authors: Gopalkrishna Joshi, Narasimha H Ayachit, Kamakshi Prasad
Comments: 13 Pages.

The increasing complexity of processes and their distributed nature in enterprises are resulting in the generation of data that is both huge and complex, and data quality plays an important role because decision making in enterprises depends on this data. Data quality is a multidimensional concept; however, there is no commonly accepted set of data quality dimensions, or analysis of them, in the literature. Further, not all the dimensions available in the literature are relevant in a particular information system context, nor do all of them carry the same importance in a given context. Practitioners in the field choose data quality dimensions based on intuitive understanding, industrial experience or literature review; there is no rigorously defined mechanism for choosing appropriate dimensions for the information system under consideration in a particular context. In this paper, the authors propose a novel method of choosing appropriate data quality dimensions for an information system that brings in the perspective of the data consumer. The method is based on the Analytic Hierarchy Process (AHP), popularly used in multi-criterion decision making, and is demonstrated in the context of distributed information systems.
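A minimal AHP sketch in Python/NumPy, with invented pairwise judgments over three hypothetical data quality dimensions, showing how the priority vector and consistency ratio are obtained; the paper's actual hierarchy and judgments differ.

```python
import numpy as np

# Minimal AHP sketch for weighting hypothetical data quality dimensions
# (accuracy, completeness, timeliness). The pairwise comparison values are
# illustrative only.

dimensions = ["accuracy", "completeness", "timeliness"]
A = np.array([
    [1.0, 3.0, 5.0],     # accuracy vs. others (Saaty 1-9 scale)
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # priority vector

# Consistency: CI = (lambda_max - n) / (n - 1), CR = CI / RI (RI = 0.58 for n = 3)
n = len(A)
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58

for d, w in zip(dimensions, weights):
    print(f"{d}: {w:.3f}")
print(f"consistency ratio: {cr:.3f} (acceptable if < 0.1)")
```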
Category: Data Structures and Algorithms

[18] viXra:1405.0054 [pdf] submitted on 2014-05-06 23:38:30

Concurrent Adaptive Cancellation of Quantization Noise and Harmonic Distortion in Sigma–Delta Converter

Authors: Hamid Mohseni Pour, Ebrahim Farshidi
Comments: 10 Pages.

The adaptive noise cancellation (ANC) technique can remove thermal noise and shaped wideband quantization noise from the output of a sigma-delta modulator and improves the SNR and SFDR. However, besides the desired signal, the ANC filter passes, without any suppression, harmonics of the input signal caused by analog elements such as the operational amplifier of the integrator, and this limits the improvement in the SNR and SFDR of the analog-to-digital converter. This paper presents a technique that addresses this issue by adding an adaptive harmonic canceller filter in front of the ANC filter, which considerably improves the performance of the ADC. The simulation results demonstrate the effectiveness of this combined technique in a first-order sigma-delta converter.
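A generic LMS-based adaptive noise canceller sketch in Python/NumPy, illustrating the ANC principle only; the sigma-delta-specific filtering and the harmonic canceller stage of the paper are not reproduced, and the noise path and step size are invented for the example.

```python
import numpy as np

# Minimal LMS adaptive noise canceller: a reference noise input is filtered
# adaptively and subtracted from the primary input so that only the desired
# signal remains in the error output.

def lms_cancel(primary, reference, taps=16, mu=0.01):
    w = np.zeros(taps)
    out = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        x = reference[n - taps + 1:n + 1][::-1]   # reference window, newest first
        noise_est = w @ x
        e = primary[n] - noise_est                # error = cleaned output sample
        w += 2 * mu * e * x                       # LMS weight update
        out[n] = e
    return out

rng = np.random.default_rng(0)
t = np.arange(4000)
desired = np.sin(2 * np.pi * 0.01 * t)
noise = rng.standard_normal(len(t))
primary = desired + np.convolve(noise, [0.8, -0.3, 0.2])[:len(t)]  # noisy channel
cleaned = lms_cancel(primary, noise)
print("residual noise power:", round(float(np.mean((cleaned[2000:] - desired[2000:]) ** 2)), 4))
```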
Category: Data Structures and Algorithms

[17] viXra:1405.0051 [pdf] submitted on 2014-05-07 01:24:58

An Investigation on Project Management Standard Practices in IT Organization

Authors: Pecimuthu Gopalasamy, Zulkefli Mansor
Comments: 12 Pages.

In many organizations, project management is no longer a separately identified function but is entrenched in the overall management of the business. The typical project management environment has become a multi-project one. Most project decisions require consideration of the schedule, resource and cost impacts on other project work, necessitating the review and evaluation of multi-project data. Without good project management standard practices, it is very hard for an organization to reach its targets. The research problem of this study is to assess how IT organizations are using project management standard practices. The research method employed was first to identify the best practices of project management, focusing on generally accepted standards and practices that are particularly effective in helping an organization achieve its objectives. This also requires the ability to manage projects in today's complex, fast-changing organizations, whose people, processes and operating systems all work together in a collaborative, integrated fashion.
Category: Data Structures and Algorithms

[16] viXra:1405.0050 [pdf] submitted on 2014-05-07 01:31:32

Software Maintenance of Deployed Wireless Sensor Nodes for Structural Health Monitoring Systems

Authors: S.A.Quadri, Othman Sidek
Comments: 28 Pages.

The decreasing cost of sensors is resulting in an increase in the use of wireless sensor networks for structural health monitoring. In most applications, nodes are deployed once and are supposed to operate unattended for a long period of time. Because a large number of sensor nodes are deployed, it is not uncommon for sensor nodes to become faulty and unreliable. Faults may arise from hardware or software failure. Software failure causes non-deterministic behavior of the node, resulting in the acquisition of inaccurate data. Consequently, there is a need to modify the system software and correct the faults in the nodes of a wireless sensor network (WSN). Once the nodes are deployed, it is impractical at best to reach each individual node. Moreover, it is highly cumbersome to detach a sensor node and attach data-transfer cables for software updates. Over-the-air programming is a fundamental service that serves this purpose. This paper discusses software maintenance issues for sensor nodes deployed in structural health monitoring and provides a comparison of various protocols developed for reprogramming.
Category: Data Structures and Algorithms

[15] viXra:1405.0049 [pdf] submitted on 2014-05-07 01:32:37

A Study of Information Security in E- Commerce Applications

Authors: Mohammed Ali Hussain
Comments: 9 Pages.

Electronic commerce (e-commerce) refers to the buying and selling of goods and services via electronic channels, primarily the Internet. The applications of e-commerce include online book stores, e-banking, online ticket reservation (railway, airline, movie, etc.), buying and selling goods, online funds transfer and so on. During e-commerce transactions, confidential information is stored in databases as well as communicated through network channels, so security is the main concern in e-commerce. E-commerce applications are vulnerable to various security threats, which results in a loss of consumer confidence, so security tools are needed to counter such threats. This paper presents an overview of security threats to e-commerce applications and the technologies to counter them.
Category: Data Structures and Algorithms

[14] viXra:1405.0048 [pdf] submitted on 2014-05-07 01:34:19

Minimizing Clock Power Wastage By Using Conditional Pulse Enhancement Scheme

Authors: A. Saisudheer, V. Murali Praveen, S. Jhansi Lakshmi
Comments: 6 Pages.

In this paper, a low-power pulse-triggered flip-flop (FF) design is presented. First, a simple two-transistor AND gate is used to reduce the circuit complexity. Second, a conditional pulse-enhancement technique is devised to speed up the discharge along the critical path only when needed. As a result, transistor sizes in the delay inverter and pulse-generation circuit can be reduced for power saving. Post-layout simulation results based on UMC CMOS 50-nm technology reveal that the proposed design features the best power-delay-product performance among the FF designs under comparison. Its maximum power saving against rival designs is up to 18.2%, and the average leakage power consumption is also reduced by a factor of 1.52.
Category: Data Structures and Algorithms

[13] viXra:1405.0047 [pdf] submitted on 2014-05-07 01:35:18

SSBD: Single Side Buffered Deflection Router for On-Chip Networks

Authors: V.Sankaraiah, V.Murali Praveen
Comments: 6 Pages.

As technology scaling drives the number of processors upward, current on-chip routers consume substantial portions of the chip area, performance, cost and power budgets. Recent work proposes a well-known routing technique, bufferless deflection routing, which eliminates buffers, and hence buffer power (static and dynamic), at the cost of some misrouting or deflection. While bufferless NoC designs have shown promising area and power reductions and offer performance similar to conventional buffered designs for many workloads, they provide lower throughput, unnecessary network hops and wasted power at high network loads. To address this issue, we propose an innovative NoC router design called the Single Side Buffered Deflection (SSBD) router. Compared to previous bufferless deflection routers, SSBD contributes (i) a router microarchitecture with a double-width ejection path and enhanced arbitration with in-router prioritization, and (ii) small side buffers to hold some traffic that would otherwise have been deflected.
Category: Data Structures and Algorithms

[12] viXra:1405.0046 [pdf] submitted on 2014-05-07 01:36:50

Information & Communication Technology for Improving Livelihoods of Tribal Community in India

Authors: Vinay Kumar, Abhishek Bansal
Comments: 9 Pages.

The development level of a society is a measure of how efficiently the society is harnessing the benefits of the different developmental and welfare programs initiated by the government of the day. Tribal communities in India have been deprived of opportunities because of many factors; one important factor is the unavailability of suitable infrastructure for development plans to reach them. It is widely acknowledged that Information and Communication Technologies (ICTs) have the potential to play a vital role in social development. Several projects have attempted to adopt these technologies to improve reach and enhance the coverage base by minimizing processing costs and shortening the traditional cycles of output deliverables. ICTs can be used to strengthen and develop the information systems of development plans exclusively for tribal communities, thereby improving effective monitoring of their implementation. The paper attempts to highlight the effectiveness of ICT in improving the livelihoods of tribal communities in India.
Category: Data Structures and Algorithms

[11] viXra:1405.0045 [pdf] submitted on 2014-05-07 01:37:39

Implementation of Distributed Canny Edge Detector on FPGA

Authors: T.Rupalatha, G.Rajesh, K.Nandakumar
Comments: 7 Pages.

Edge detection is one of the basic operations carried out in image processing and object identification. In this paper, we present a distributed Canny edge detection algorithm that results in significantly reduced memory requirements, decreased latency and increased throughput, with no loss in edge detection performance compared to the original Canny algorithm. The new algorithm uses a low-complexity 8-bin non-uniform gradient magnitude histogram to compute block-based hysteresis thresholds that are used by the Canny edge detector. Furthermore, an FPGA-based hardware architecture of the proposed algorithm is presented and synthesized on the Xilinx Spartan-3E FPGA. Simulation results are presented to illustrate the performance of the proposed distributed Canny edge detector. The FPGA simulation results show that a 512×512 image can be processed in 0.28 ms at a clock rate of 100 MHz.
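A software sketch of the block-based thresholding idea, assuming OpenCV and NumPy: gradient magnitudes are examined per block and converted into per-block hysteresis thresholds that drive a standard Canny pass. The 8-bin non-uniform histogram and the FPGA pipeline of the paper are not reproduced, and the image filename, block size and percentile are placeholders.

```python
import cv2
import numpy as np

# Per-block hysteresis thresholds derived from local gradient statistics,
# then a standard Canny pass on each block (a software stand-in only).

def block_canny(gray, block=64, high_pct=80, low_ratio=0.4):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = np.hypot(gx, gy)
    edges = np.zeros_like(gray)
    for y in range(0, gray.shape[0], block):
        for x in range(0, gray.shape[1], block):
            tile = gray[y:y + block, x:x + block]
            tile_mag = mag[y:y + block, x:x + block]
            high = float(np.percentile(tile_mag, high_pct))   # block-local threshold
            low = low_ratio * high
            edges[y:y + block, x:x + block] = cv2.Canny(tile, low, high)
    return edges

gray = cv2.imread("lena.png", cv2.IMREAD_GRAYSCALE)           # placeholder filename
if gray is not None:
    cv2.imwrite("edges.png", block_canny(gray))
```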
Category: Data Structures and Algorithms

[10] viXra:1405.0044 [pdf] submitted on 2014-05-07 01:38:30

Enhanced Face Recognition System Combining PCA, LDA, ICA with Wavelet Packets and Curvelets

Authors: N.Nallammal, V.Radha
Comments: 11 Pages.

Face recognition is one of the most frequently used biometrics in both commercial and law enforcement applications. What distinguishes facial recognition from other biometric techniques is that it can be used for surveillance purposes, as in searching for wanted criminals, suspected terrorists and missing children. The steps in a face recognition system are preprocessing (image enhancement), feature extraction and, finally, recognition. This paper identifies techniques for each step of the recognition process to improve the overall performance of face recognition. The proposed face recognition model combines an enhanced 2DPCA algorithm, LDA and ICA with wavelet packets and curvelets, and experimental results show that the combination of these techniques increases the efficiency of the recognition process and improves on existing systems.
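A minimal PCA + LDA recognition sketch using scikit-learn on the public Olivetti faces dataset (downloaded on first use); the paper's enhanced 2DPCA, ICA and wavelet-packet/curvelet preprocessing are omitted, and the component counts are illustrative.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Minimal eigenface (PCA) + LDA pipeline with nearest-neighbor recognition.
faces = fetch_olivetti_faces()
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0
)

model = make_pipeline(
    PCA(n_components=100, whiten=True),        # project faces onto eigenfaces
    LinearDiscriminantAnalysis(),              # maximize between-class separation
    KNeighborsClassifier(n_neighbors=1),       # nearest-neighbor recognition
)
model.fit(X_train, y_train)
print("recognition accuracy:", round(model.score(X_test, y_test), 3))
```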
Category: Data Structures and Algorithms

[9] viXra:1405.0043 [pdf] submitted on 2014-05-07 01:39:33

An Efficient Carry Select Adder with Reduced Area Application

Authors: Ch. Pallavi, V. Swathi
Comments: 7 Pages.

The design of area-efficient, high-speed and power-efficient data path logic systems is one of the largest areas of research in VLSI system design. In digital adders, the speed of addition is limited by the time required to propagate a carry through the adder. The carry select adder (CSLA) is one of the fastest adders used in many data-processing processors to perform fast arithmetic functions. From the structure of the CSLA, it is clear that there is scope for reducing its area and delay. This work uses a simple and efficient gate-level modification (of the regular structure) that drastically reduces the area and delay of the CSLA. Based on this modification, 8-, 16-, 32- and 64-bit square-root carry select adder (SQRT CSLA) architectures have been developed and compared with the regular SQRT CSLA architecture. The proposed design reduces area and delay to a great extent compared with the regular SQRT CSLA. This work evaluates the performance of the proposed designs against the regular designs in terms of delay and area; the designs are synthesized and implemented on a Xilinx FPGA. The result analysis shows that the proposed SQRT CSLA structure is better than the regular SQRT CSLA.
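A behavioral Python model of the carry-select principle: each block's sum is precomputed for carry-in 0 and carry-in 1, and a multiplexer picks the right one when the real carry arrives. The uniform 4-bit blocks and 16-bit width are illustrative; the paper's SQRT CSLA uses square-root (variable) block sizes.

```python
# Behavioral sketch of a carry-select adder.

def ripple_block(a_bits, b_bits, carry_in):
    """Ripple-carry addition of one block; returns (sum_bits, carry_out)."""
    sum_bits, carry = [], carry_in
    for a, b in zip(a_bits, b_bits):
        sum_bits.append(a ^ b ^ carry)
        carry = (a & b) | (carry & (a ^ b))
    return sum_bits, carry

def carry_select_add(a, b, width=16, block=4):
    a_bits = [(a >> i) & 1 for i in range(width)]
    b_bits = [(b >> i) & 1 for i in range(width)]
    result, carry = [], 0
    for i in range(0, width, block):
        s0, c0 = ripple_block(a_bits[i:i + block], b_bits[i:i + block], 0)   # assume carry-in 0
        s1, c1 = ripple_block(a_bits[i:i + block], b_bits[i:i + block], 1)   # assume carry-in 1
        result += s1 if carry else s0          # mux selects the precomputed sum
        carry = c1 if carry else c0
    return sum(bit << i for i, bit in enumerate(result)) + (carry << width)

print(carry_select_add(40000, 30000))          # 70000
```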
Category: Data Structures and Algorithms

[8] viXra:1405.0042 [pdf] submitted on 2014-05-07 01:40:47

A Technique of Image Compression Based on Discrete Wavelet Image Decomposition and Self Organizing Map

Authors: Megha Sharma, Rashmi Kuamri
Comments: 12 Pages.

Image compression is a growing research area for real-world applications, driven by the explosive growth of image transmission and storage. This paper presents an algorithm for gray-scale image compression using a self-organizing map (SOM) and the discrete wavelet transform (DWT). The self-organizing map network is trained with input patterns in the form of vectors, giving code vectors (the weight matrix) and index values as output. The discrete wavelet transform is then applied to the code vectors, and only the approximation coefficients (LL) and the index values obtained from the self-organizing map are stored. The results obtained show a better compression ratio as well as a better peak signal-to-noise ratio (PSNR) in comparison with existing techniques.
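A minimal sketch of the SOM-plus-DWT idea, assuming the minisom and PyWavelets packages are available: image blocks are vector-quantized by a small SOM (only winner indices are kept) and the learned codebook is wavelet-decomposed so that only approximation coefficients need to be stored. Block size, map size and the synthetic test image are placeholders, not the paper's settings.

```python
import numpy as np
import pywt
from minisom import MiniSom

# SOM-based vector quantization of 4x4 image blocks, followed by a DWT step
# that keeps only approximation coefficients of the codebook (lossy).

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(float)   # stand-in grayscale image

# Split the image into 4x4 blocks and flatten each block into a 16-vector.
blocks = image.reshape(16, 4, 16, 4).transpose(0, 2, 1, 3).reshape(-1, 16) / 255.0

som = MiniSom(8, 8, 16, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(blocks, 2000)                               # learn the codebook

indices = [som.winner(v) for v in blocks]                    # per-block index values
codebook = som.get_weights().reshape(-1, 16)                 # 64 code vectors

cA, cD = pywt.dwt(codebook, "haar", axis=1)                  # keep cA, discard cD
print("stored:", len(indices), "indices +", cA.shape, "approximation coefficients")
```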
Category: Data Structures and Algorithms

[7] viXra:1405.0041 [pdf] submitted on 2014-05-07 01:42:04

Adaptive Duty-Cycle-Aware Using Multihopping in WSN

Authors: J. V. Shiral, J. S. Zade, K. R. Bhakare, N. Gandhewar
Comments: 15 Pages.

A wireless sensor network consists of a group of sensors, or nodes, that are linked by a wireless medium to perform distributed sensing tasks. The sensors are assumed to have a fixed communication range and a fixed sensing range, which can vary significantly depending on the type of sensing performed. The duty cycle is the ratio of the active time, i.e. the time during which a particular set of nodes is active, to the whole scheduling period. With duty cycling, each node alternates between active and sleeping states, leaving its radio powered off most of the time and turning it on only periodically for short periods. In this paper, an ADB protocol is used to manage and control duty cycles as well as to regulate and monitor ongoing traffic among the nodes by using adaptive scheduling. Thus congestion and delay can be controlled, and the efficiency and performance of the overall network can be improved.
Category: Data Structures and Algorithms

[6] viXra:1405.0040 [pdf] submitted on 2014-05-07 01:43:45

Memory Centered Recognition of FIR Numerical Filter by LUT Optimization

Authors: A. Saisudheer
Comments: 12 Pages.

The finite impulse response (FIR) digital filter is widely used in signal processing and image processing applications. Distributed arithmetic (DA) based computation is popular for its potential for efficient memory-based implementation of FIR filters, where the filter output is computed as the inner product of an input-sample vector and the filter-coefficient vector. In this paper, however, we show that the look-up-table (LUT) multiplier-based approach, in which the memory elements store all the possible values of products of the filter coefficients, can be an area-efficient alternative to the DA-based design of FIR filters with the same implementation throughput.
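A behavioral Python sketch of the LUT-multiplier idea: for each fixed coefficient, all products with an 8-bit input sample are precomputed once, so every output sample is formed by lookups and additions only. The coefficients and word lengths are invented for the example.

```python
import numpy as np

# One lookup table per coefficient: lut[k][s] = coeffs[k] * s for every
# possible 8-bit sample s, so no multipliers are needed at run time.

coeffs = [3, -5, 7, 2]                                   # fixed FIR coefficients
SAMPLE_LEVELS = 256                                      # 8-bit unsigned samples

luts = [np.array([c * s for s in range(SAMPLE_LEVELS)]) for c in coeffs]

def fir_lut(samples):
    """Compute the FIR output using only lookups and adds."""
    out = []
    taps = len(coeffs)
    padded = [0] * (taps - 1) + list(samples)
    for n in range(len(samples)):
        window = padded[n:n + taps][::-1]                # most recent sample first
        out.append(sum(luts[k][x] for k, x in enumerate(window)))
    return out

x = [10, 0, 255, 128, 4]
print(fir_lut(x))
print(list(np.convolve(x, coeffs)[:len(x)]))             # reference: same values
```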
Category: Data Structures and Algorithms

[5] viXra:1405.0039 [pdf] submitted on 2014-05-07 01:44:44

Transmission of Image using DWT-OFDM System with Channel State Feedback

Authors: Lakshmi Pujitha Dachuri
Comments: 16 Pages.

In many applications, retransmission of lost packets is not permitted. OFDM is a multi-carrier modulation scheme with excellent performance that allows overlapping in the frequency domain, and with OFDM multipath can be handled using relatively simple DSP algorithms. In this paper, an image frame is compressed using the DWT, and the compressed data are arranged in data vectors, each with an equal number of coefficients. These vectors are quantized and binary coded to obtain the bit streams, which are then packetized and intelligently mapped to the OFDM system. Based on one-bit channel state information at the transmitter, the descriptions, in order of descending priority, are assigned to the currently good sub-channels, so that poorer sub-channels can only affect the less important data vectors. We consider only one-bit channel state information at the transmitter, indicating only whether each sub-channel is good or bad. For a good sub-channel, the instantaneous received power should be greater than a threshold Pth; otherwise, the sub-channel is in a fading state and is considered bad for that batch of coefficients. In order to reduce system power consumption, the descriptions mapped onto the bad sub-channels are dropped at the transmitter. The binary channel state information gives an opportunity to map the bit streams intelligently and to save a reasonable amount of power. Using MATLAB simulation, we analyze the performance of the proposed scheme in terms of system energy saving without compromising the received quality, measured by the peak signal-to-noise ratio.
Category: Data Structures and Algorithms

[4] viXra:1405.0038 [pdf] submitted on 2014-05-07 01:53:06

Object Tracking System Using Stratix FPGA

Authors: A. Saisudheer
Comments: 9 Pages.

Object tracking is an important task in computer vision applications. One of the crucial challenges is the real time speed requirement. In this paper we implement an object tracking system in reconfigurable hardware using an efficient parallel architecture. In our implementation, we adopt a background subtraction based algorithm. The designed object tracker exploits hardware parallelism to achieve high system speed. We also propose a dual object region search technique to further boost the performance of our system under complex tracking conditions. For our hardware implementation we use the Altera Stratix III EP3SL340H1152C2 FPGA device. We compare the proposed FPGA-based implementation with the software implementation running on a 2.2 GHz processor. The observed speedup can reach more than 100X for complex video inputs.
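A software reference sketch, assuming OpenCV, of the kind of background-subtraction tracking pipeline the paper maps onto the FPGA; OpenCV's MOG2 model stands in for the paper's background-subtraction algorithm, and the video filename and parameters are placeholders.

```python
import cv2

# Background subtraction -> foreground mask -> largest blob -> bounding box.

capture = cv2.VideoCapture("input.avi")                      # placeholder file
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

while True:
    ok, frame = capture.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                           # foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        target = max(contours, key=cv2.contourArea)          # track the largest blob
        x, y, w, h = cv2.boundingRect(target)
        print("object at", (x, y, w, h))

capture.release()
```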
Category: Data Structures and Algorithms

[3] viXra:1405.0037 [pdf] submitted on 2014-05-07 01:55:43

Smart Phone as Software Token for Generating Digital Signature Code for Signing In Online Banking Transaction

Authors: A. Saisudheer
Comments: 4 Pages.

Nowadays, online banking security focuses on safe authentication mechanisms, but all these mechanisms are rendered useless if we are unable to ensure the integrity of the transactions made. Of late, a new threat has emerged, known as the Man-in-the-Browser (MITB) attack; it is capable of modifying a transaction in real time without the user's knowledge, after the user has successfully logged in using safe authentication mechanisms. In this paper we analyze the Man-in-the-Browser attack and propose a solution based upon digitally signing a transaction, using the mobile phone as a software token for digital signature code generation. Two-factor authentication solutions such as smartcards, hardware tokens, one-time passwords and PKI have long been considered sufficient protection against identity theft techniques. However, since the MITB attack piggybacks on authenticated sessions rather than trying to steal or impersonate an identity, most authentication technologies are incapable of preventing its success. We take a brief look at how the MITB attack takes place and how it is capable of modifying an online transaction, and we propose a solution based on using the mobile phone as a software token for digital signature code generation. A digital signature is known to ensure the authenticity and integrity of a transaction, and since mobile phones have become a daily part of our lives, they can serve as the software token that generates the digital signature code.
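A minimal sketch of the signing step, assuming the Python 'cryptography' package: a key held on the phone signs the transaction details and the bank verifies them, so any browser-side tampering invalidates the signature. Plain ECDSA is used purely for illustration; it is not the paper's exact code-generation scheme, and the account strings are made up.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# The private key lives on the phone (the software token); the bank holds the
# matching public key. Any change to the transaction breaks the signature.

phone_key = ec.generate_private_key(ec.SECP256R1())          # stays on the phone
bank_registered_key = phone_key.public_key()                 # known to the bank

transaction = b"pay 150.00 EUR to account 0000-0000-0001"
signature = phone_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# The bank verifies the signature against the transaction it actually received.
tampered = b"pay 150.00 EUR to account 9999-9999-9999"
for received in (transaction, tampered):
    try:
        bank_registered_key.verify(signature, received, ec.ECDSA(hashes.SHA256()))
        print("ACCEPTED:", received.decode())
    except InvalidSignature:
        print("REJECTED:", received.decode())
```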
Category: Data Structures and Algorithms

[2] viXra:1405.0036 [pdf] submitted on 2014-05-07 01:56:28

Facial Expression Recognition System by Using AFERS System

Authors: A. Saisudheer
Comments: 7 Pages.

Heightened concerns about the treatment of individuals during interviews and interrogations have stimulated efforts to develop "non-intrusive" technologies for rapidly assessing the credibility of statements by individuals in a variety of sensitive environments. Methods or processes that have the potential to precisely focus investigative resources will advance operational excellence and improve investigative capabilities. Facial expressions have the ability to communicate emotion and regulate interpersonal behavior. Over the past 30 years, scientists have developed human-observer-based methods that can be used to classify and correlate facial expressions with human emotion. However, these methods have proven to be labor intensive, qualitative, and difficult to standardize. The Facial Action Coding System (FACS) developed by Paul Ekman and Wallace V. Friesen is the most widely used and validated method for measuring and describing facial behaviors. The Automated Facial Expression Recognition System (AFERS) automates the manual practice of FACS, leveraging the research and technology behind the CMU/PITT Automated Facial Image Analysis System (AFA) system developed by Dr. Jeffrey Cohn and his colleagues at the Robotics Institute of Carnegie Mellon University. This portable, near real-time system will detect the seven universal expressions of emotion, providing investigators with indicators of the presence of deception during the interview process. In addition, the system will include features such as full video support, snapshot generation, and case management utilities, enabling users to re-evaluate interviews in detail at a later date.
Category: Data Structures and Algorithms

[1] viXra:1405.0035 [pdf] submitted on 2014-05-07 01:58:08

An Effective GLCM and Binary Pattern Schemes Based Classification for Rotation Invariant Fabric Textures

Authors: R. Obula Konda Reddy, B. Eswara Reddy, E. Keshava Reddy
Comments: 16 Pages.

Texture is one of the basic features in visual search and computational vision, and is a general, though ambiguous, property of any surface. This paper presents a novel texture classification system with a high tolerance to illumination variation. A Gray Level Co-occurrence Matrix (GLCM) and binary-pattern-based automated similarity identification and defect detection model is presented. Different features are calculated from both the GLCM and the binary patterns (LBP, LLBP and SLBP). Then a new rotation-invariant, scale-invariant steerable decomposition filter is applied to filter the four orientation sub-bands of the image. The experimental results are evaluated and a comparative analysis is performed for the four different feature types. Finally, the texture is classified by different classifiers (PNN, KNN and SVM) and the classification performance of each classifier is compared. The experimental results show that the proposed method achieves higher accuracy and a better classification rate than other methods.
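A minimal GLCM + LBP + SVM sketch, assuming scikit-image and scikit-learn: GLCM properties averaged over four orientations plus a uniform-LBP histogram feed an SVM. The LLBP/SLBP variants, steerable decomposition and real fabric data of the paper are not reproduced; the synthetic two-class patches are placeholders. (Older scikit-image releases name the GLCM functions greycomatrix/greycoprops.)

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def texture_features(gray):
    # GLCM properties averaged over four orientations, plus a uniform LBP histogram.
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    props = [graycoprops(glcm, p).mean() for p in ("contrast", "homogeneity", "energy")]
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([props, hist])

# Tiny illustrative "dataset": smooth vs. noisy synthetic patches, two classes.
rng = np.random.default_rng(0)
smooth = [np.full((64, 64), 128, dtype=np.uint8) + rng.integers(0, 5, (64, 64), dtype=np.uint8)
          for _ in range(10)]
rough = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(10)]
X = np.array([texture_features(img) for img in smooth + rough])
y = np.array([0] * 10 + [1] * 10)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X[::2], y[::2])  # train on half
print("test accuracy:", clf.score(X[1::2], y[1::2]))                          # evaluate on the rest
```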
Category: Data Structures and Algorithms