
Sunday, September 23, 2012

Project abstract - Optimizing Search Engines using Clickthrough Data

This paper presents an approach to automatically optimizing the retrieval quality of search engines using clickthrough data. Intuitively, a good information retrieval system should present relevant documents high in the ranking, with less relevant documents following below. While previous approaches to learning retrieval functions from examples exist, they typically require training data generated from relevance judgments by experts. This makes them difficult and expensive to apply. The goal of this paper is to develop a method that utilizes clickthrough data for training, namely the query-log of the search engine in connection with the log of links the users clicked on in the presented ranking. Such clickthrough data is available in abundance and can be recorded at very low cost. Taking a Support Vector Machine (SVM) approach, this paper presents a method for learning retrieval functions. From a theoretical perspective, this method is shown to be well-founded in a risk minimization framework. Furthermore, it is shown to be feasible even for large sets of queries and features. The theoretical results are verified in a controlled experiment. It shows that the method can effectively adapt the retrieval function of a meta-search engine to a particular group of users, outperforming Google in terms of retrieval quality after only a couple of hundred training examples.
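
The core idea is that a click on a lower-ranked result is an implicit judgment that it should have outranked the results skipped above it, and that these preference pairs can train a ranking SVM. Below is a minimal sketch of that pairwise formulation in plain Python/NumPy, with made-up feature vectors and a single preference set; it illustrates the idea, not the paper's actual training procedure.

```python
import numpy as np

# Toy clickthrough preferences: a clicked result is preferred over the
# skipped results ranked above it. Each document is a feature vector;
# a pair (a, b) means "document a should outrank document b".
docs = np.array([
    [0.9, 0.2, 0.4],   # doc 0 (ranked first, skipped)
    [0.3, 0.8, 0.1],   # doc 1 (clicked although ranked below doc 0)
    [0.5, 0.1, 0.7],   # doc 2 (skipped)
])
pairs = [(1, 0), (1, 2)]   # doc 1 preferred over docs 0 and 2

# A ranking SVM reduces to a classification SVM on difference vectors:
# we want w . (x_a - x_b) >= 1 for every preference (a, b).
w = np.zeros(docs.shape[1])
lam, lr = 0.01, 0.1        # regularization strength, learning rate
for epoch in range(200):   # plain subgradient descent on the hinge loss
    for a, b in pairs:
        diff = docs[a] - docs[b]
        margin = w @ diff
        grad = lam * w - (diff if margin < 1 else 0)
        w -= lr * grad

scores = docs @ w
print("learned ranking:", np.argsort(-scores))  # doc 1 should come first
```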

Download

Project abstract - Mining Sequential Patterns

We are given a large database of customer transactions, where each transaction consists of a customer ID, a transaction time, and the items bought in the transaction. We introduce the problem of mining sequential patterns over such databases. We present three algorithms to solve this problem, and empirically evaluate their performance using synthetic data. Two of the proposed algorithms, AprioriSome and AprioriAll, have comparable performance, although AprioriSome performs a little better when the minimum number of customers that must support a sequential pattern is low. Scale-up experiments show that both AprioriSome and AprioriAll scale linearly with the number of customer transactions. They also have excellent scale-up properties with respect to the number of transactions per customer and the number of items in a transaction.
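
To make the flavor of these algorithms concrete, here is a small level-wise sequence miner in Python. It follows the general Apriori pattern of counting candidates and joining frequent patterns, but it is only a toy sketch on invented transactions, not an implementation of AprioriSome or AprioriAll as specified in the paper.

```python
# Toy customer sequences (one time-ordered list of items per customer).
sequences = [
    ["bread", "milk", "beer"],
    ["bread", "beer"],
    ["milk", "bread", "beer"],
    ["bread", "milk"],
]
min_support = 3   # minimum number of customers supporting a pattern

def supports(seq, pattern):
    """True if `pattern` occurs in `seq` as a (not necessarily
    contiguous) subsequence, i.e. the items appear in the same order."""
    it = iter(seq)
    return all(item in it for item in pattern)  # `in` consumes the iterator

def frequent_sequences(sequences, min_support):
    items = sorted({i for s in sequences for i in s})
    level = [(i,) for i in items]     # length-1 candidates
    result = []
    while level:
        # count support of each candidate and keep the frequent ones
        frequent = [p for p in level
                    if sum(supports(s, p) for s in sequences) >= min_support]
        result.extend(frequent)
        # Apriori-style join: extend patterns whose tails/heads overlap
        level = [p + (q[-1],) for p in frequent for q in frequent
                 if p[1:] == q[:-1]]
    return result

print(frequent_sequences(sequences, min_support))
```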

Download

Project abstract - Query Evaluation Techniques for Large Databases

Database management systems will continue to manage large data volumes. Thus, efficient algorithms for accessing and manipulating large sets and sequences will be required to provide acceptable performance. The advent of object-oriented and extensible database systems will not solve this problem. On the contrary, modern data models exacerbate it: In order to manipulate large sets of complex objects as efficiently as today’s database systems manipulate simple records, query processing algorithms and software will become more complex, and a solid understanding of algorithm and architectural issues is essential for the designer of database management software. This survey provides a foundation for the design and implementation of query execution facilities in new database management systems. It describes a wide array of practical query evaluation techniques for both relational and post-relational database systems, including iterative execution of complex query evaluation plans, the duality of sort- and hash-based set matching algorithms, types of parallel query execution and their implementation, and special operators for emerging database application domains.
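
The duality of sort- and hash-based set matching mentioned above is easy to see in miniature. The Python sketch below implements a toy hash join and a toy sort-merge join over (key, payload) pairs; both produce the same result set, differing only in how they organize the matching work. Relation contents are invented for illustration.

```python
def hash_join(r, s):
    table = {}
    for key, val in r:                        # build phase on one input
        table.setdefault(key, []).append(val)
    return [(key, rv, sv)                     # probe phase on the other
            for key, sv in s
            for rv in table.get(key, [])]

def sort_merge_join(r, s):
    r, s = sorted(r), sorted(s)
    out, i, j = [], 0, 0
    while i < len(r) and j < len(s):
        if r[i][0] < s[j][0]:
            i += 1
        elif r[i][0] > s[j][0]:
            j += 1
        else:                                 # equal keys: emit group product
            key = r[i][0]
            r_group, s_group = [], []
            while i < len(r) and r[i][0] == key:
                r_group.append(r[i][1]); i += 1
            while j < len(s) and s[j][0] == key:
                s_group.append(s[j][1]); j += 1
            out.extend((key, rv, sv) for rv in r_group for sv in s_group)
    return out

r = [(1, "a"), (2, "b"), (2, "c")]
s = [(2, "x"), (3, "y")]
assert sorted(hash_join(r, s)) == sorted(sort_merge_join(r, s))
print(sort_merge_join(r, s))   # [(2, 'b', 'x'), (2, 'c', 'x')]
```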

Download

Project abstract - The Entity-Relationship Model

A data model, called the entity-relationship model, is proposed. This model incorporates some of the important semantic information about the real world. A special diagrammatic technique is introduced as a tool for database design. An example of database design and description using the model and the diagrammatic technique is given. Some implications for data integrity, information retrieval, and data manipulation are discussed. The entity-relationship model can be used as a basis for unification of different views of data: the network model, the relational model, and the entity set model. Semantic ambiguities in these models are analyzed. Possible ways to derive their views of data from the entity-relationship model are presented. Key words and phrases: database design, logical view of data, semantics of data, data models, entity-relationship model, relational model, Data Base Task Group, network model.

Download

Project Abstract - Resilient Overlay Networks

A Resilient Overlay Network (RON) is an architecture that allows distributed Internet applications to detect and recover from path outages and periods of degraded performance within several seconds, improving over today’s wide-area routing protocols that take at least several minutes to recover. A RON is an application-layer overlay on top of the existing Internet routing substrate. The RON nodes monitor the functioning and quality of the Internet paths among themselves, and use this information to decide whether to route packets directly over the Internet or by way of other RON nodes, optimizing application-specific routing metrics. Results from two sets of measurements of a working RON deployed at sites scattered across the Internet demonstrate the benefits of our architecture. For instance, over a 64-hour sampling period in March 2001 across a twelve-node RON, there were 32 significant outages, each lasting over thirty minutes, over the 132 measured paths. RON’s routing mechanism was able to detect, recover, and route around all of them, in less than twenty seconds on average, showing that its methods for fault detection and recovery work well at discovering alternate paths in the Internet. Furthermore, RON was able to improve the loss rate, latency, or throughput perceived by data transfers; for example, about 5% of the transfers doubled their TCP throughput and 5% of our transfers saw their loss probability reduced by 0.05. We found that forwarding packets via at most one intermediate RON node is sufficient to overcome faults and improve performance in most cases. These improvements, particularly in the area of fault detection and recovery, demonstrate the benefits of moving some of the control over routing into the hands of end-systems.
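
The heart of RON's routing decision is simple to sketch: for each destination, compare the measured quality of the direct Internet path against every single-intermediate overlay path and pick the best. The Python toy below does this for latency with made-up measurements; the real system also tracks loss and throughput and probes paths continuously.

```python
# Measured one-way latencies (ms) between overlay nodes; None = path down.
# All numbers are hypothetical, for illustration only.
latency = {
    ("A", "B"): None,   # direct Internet path A->B is currently out
    ("A", "C"): 20,
    ("C", "B"): 25,
    ("A", "D"): 15,
    ("D", "B"): 80,
}

def best_route(src, dst, nodes):
    """Pick the direct path or a single-intermediate overlay path,
    whichever minimizes latency; RON found one hop is usually enough."""
    candidates = []
    direct = latency.get((src, dst))
    if direct is not None:
        candidates.append((direct, [src, dst]))
    for mid in nodes - {src, dst}:
        leg1, leg2 = latency.get((src, mid)), latency.get((mid, dst))
        if leg1 is not None and leg2 is not None:
            candidates.append((leg1 + leg2, [src, mid, dst]))
    return min(candidates, default=None)

print(best_route("A", "B", {"A", "B", "C", "D"}))  # (45, ['A', 'C', 'B'])
```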

Download

Project Abstract - Edge Detection

For both biological systems and machines, vision begins with a large and unwieldy array of measurements of the amount of light reflected from surfaces in the environment. The goal of vision is to recover physical properties of objects in the scene, such as the location of object boundaries and the structure, color and texture of object surfaces, from the two-dimensional image that is projected onto the eye or camera. This goal is not achieved in a single step; vision proceeds in stages, with each stage producing increasingly more useful descriptions of the image and then the scene. The first clues about the physical properties of the scene are provided by the changes of intensity in the image. The importance of intensity changes and edges in early visual processing has led to extensive research on their detection, description and use, both in computer and biological vision systems. This article reviews some of the theory that underlies the detection of edges, and the methods used to carry out this analysis.
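
A classic first step in edge detection is to estimate intensity gradients with small convolution kernels and threshold their magnitude. Here is a short NumPy sketch using the well-known Sobel kernels on a synthetic image; it is a textbook illustration, not the specific method reviewed in the article.

```python
import numpy as np

def sobel_edges(img, threshold=1.0):
    """Estimate intensity gradients with 3x3 Sobel kernels and mark
    pixels whose gradient magnitude exceeds a threshold as edges."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                          # vertical-gradient kernel
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(1, h - 1):          # skip the 1-pixel border
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = np.sum(kx * patch)
            gy[y, x] = np.sum(ky * patch)
    return np.hypot(gx, gy) > threshold

# A synthetic image: dark left half, bright right half -> vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
print(sobel_edges(img).astype(int))
```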

Download

Tuesday, July 26, 2011

Seminar on Real-Time Operating Systems

A real-time operating system (RTOS) is an operating system (OS) intended to serve real-time application requests.
A key characteristic of an RTOS is the level of its consistency concerning the amount of time it takes to accept and complete an application's task; this variability is known as jitter. A hard real-time operating system has less jitter than a soft real-time operating system. The chief design goal is not high throughput, but rather a guarantee of a soft or hard performance category. An RTOS that can usually or generally meet a deadline is a soft real-time OS, but if it can meet deadlines deterministically it is a hard real-time OS.
A real-time OS has an advanced algorithm for scheduling. Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications. Key factors in a real-time OS are minimal interrupt latency and minimal thread switching latency, but a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time.
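
As a concrete taste of hard real-time analysis, the snippet below applies the classic Liu and Layland utilization bound for rate-monotonic scheduling (shorter period means higher priority): if total CPU utilization stays under n(2^(1/n) - 1), every deadline is guaranteed. The task parameters are invented for illustration.

```python
# A classic hard-real-time admission test: under rate-monotonic
# scheduling, a set of n periodic tasks is guaranteed schedulable if
# total utilization <= n * (2^(1/n) - 1). Task numbers are made up.
tasks = [   # (worst-case execution time ms, period ms)
    (1, 4),
    (2, 10),
    (3, 20),
]

n = len(tasks)
utilization = sum(c / p for c, p in tasks)
bound = n * (2 ** (1 / n) - 1)

print(f"U = {utilization:.3f}, bound = {bound:.3f}")
if utilization <= bound:
    print("schedulable under rate-monotonic priorities (sufficient test)")
else:
    print("inconclusive: an exact response-time analysis is needed")
```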

Power point presentation on real-time operating systems

RTOS - Design and Implementation
6.0 INTRODUCTION TO REAL-TIME OPERATING SYSTEMS (RTOS)
Real Time Operating Systems
Real-Time Operating Systems - Stanford
Basic Design using RTOS
Real Time Operating Systems (RTOS)

Sunday, July 17, 2011

Project on Fingerprint Verification System

We will design and implement an image recognition system to identify fingerprints based on a given database. We will begin by inputting simple images and checking that the system accurately identifies those images. As the system is developed, more complex images can be used. The final stage of the project will involve identifying an individual's fingerprint based on standard points of identification used in common practice.

This project consists of a few stages. The initial stage will involve creating a database in memory for the image comparison. The next stage will be developing an interface between the camera and RAM to store the image that needs to be identified. Once the image has been loaded into the system, it must be processed to extract the appropriate characteristics for comparison against the database. The processed image will then be compared to the images in the database to measure the quality of the similarities. The most similar image will be selected and presented to the user interface along with the quality of the identification.


The image processing will involve a series of filters in the spatial domain. There will be an edge-detection filter to sharpen the image, prior to binarization of the fingerprint. Another filter will select the unique components of the fingerprint. The database will contain the post-processed fingerprint information to minimize the size of the stored data. The database size will be limited to the memory of the labkit, which will be sufficient to demonstrate the functionality of the fingerprint matching system.
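
As a flavor of the image-processing stage, here is a Python/NumPy sketch of block-wise adaptive binarization followed by a crude pixel-agreement score. The actual project runs on FPGA labkit hardware with its own filters and database format, so treat everything here (sizes, thresholds, the similarity measure) as illustrative assumptions.

```python
import numpy as np

def binarize(img, block=8):
    """Adaptive binarization: threshold each block by its own mean, so
    ridges stay separated even when illumination varies across the print."""
    out = np.zeros_like(img, dtype=np.uint8)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = img[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = (patch > patch.mean()).astype(np.uint8)
    return out

def similarity(a, b):
    """Crude match score: fraction of agreeing pixels between two
    binarized prints of the same size (a stand-in for minutiae matching)."""
    return np.mean(a == b)

# A hypothetical 32x32 grayscale print; in the real project these come
# from the camera interface and the pre-built database.
rng = np.random.default_rng(0)
probe = rng.random((32, 32))
print(similarity(binarize(probe), binarize(probe)))  # identical -> 1.0
```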

The work will be split into two components. Bashira will be responsible for interfacing the camera to the labkit, as well as managing the data storage in memory. Cheryl will implement the image processing to isolate the data for the analysis and the matching. Once the fingerprint recognition scheme is working, both team members will work to enhance the identification interface as time allows to create a visually appealing result.


Project Files

Presentation (PDF)

Report (PDF)

Report Appendix (PDF)

Source: MIT

Saturday, July 16, 2011

Nanotechnology

Nanotechnology (sometimes shortened to "nanotech") is the study of manipulating matter on an atomic and molecular scale. Generally, nanotechnology deals with structures sized between 1 and 100 nanometres in at least one dimension, and involves developing materials or devices within that size range. Quantum mechanical effects are very important at this scale, which is in the quantum realm.
Nanotechnology is very diverse, ranging from extensions of conventional device physics to completely new approaches based upon molecular self-assembly, and from developing new materials with dimensions on the nanoscale to investigating whether we can directly control matter on the atomic scale.
There is much debate on the future implications of nanotechnology. Nanotechnology may be able to create many new materials and devices with a vast range of applications, such as in medicine, electronics, biomaterials and energy production. On the other hand, nanotechnology raises many of the same issues as any new technology, including concerns about the toxicity and environmental impact of nanomaterials, and their potential effects on global economics, as well as speculation about various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted.

Power point presentation on Nanotechnology
Introduction to Nanotechnology
Economic Impacts of Nanotechnology
Oklahoma Nanotechnology Initiative
Nanotechnology Challenges and Fears
Nanotechnology
Challenges of Nanotechnology

Seminar on web application security

A web application security scanner is a program that communicates with a web application through its web front-end in order to identify potential security vulnerabilities and architectural weaknesses in the application. It performs a black-box test: unlike source code scanners, web application scanners do not have access to the source code, and therefore detect vulnerabilities by actually performing attacks.


A web application security scanner can facilitate the automated review of a web application with the express purpose of discovering security vulnerabilities; such reviews are often required to comply with various regulatory requirements. Web application scanners can look for a wide variety of vulnerabilities, including the following (a toy probe is sketched after the list):

  • Input/output validation (cross-site scripting, SQL injection, etc.)
  • Specific application problems
  • Server configuration mistakes/errors/version
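
As a toy illustration of black-box scanning, the Python sketch below injects a unique marker into a query parameter and checks whether the response reflects it back unencoded, a simple reflected-XSS probe. The target URL and parameter are hypothetical, and a real scanner covers far more vulnerability classes, encodings, and crawling logic.

```python
import requests

# A unique marker so we can recognize our own input in the response.
MARKER = "xSs-ProBe-31337"

def probe_reflection(url, param):
    """Send a marked payload in one query parameter and report whether
    the application echoes it back without encoding the HTML tags."""
    resp = requests.get(url, params={param: f"<i>{MARKER}</i>"}, timeout=5)
    if f"<i>{MARKER}</i>" in resp.text:
        return "parameter reflected unencoded: possible XSS"
    return "no unencoded reflection observed"

# Hypothetical target; only probe applications you are authorized to test.
# print(probe_reflection("http://testsite.example/search", "q"))
```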

Power point presentation on web application security

Web application security
Web Application Security 1
Web Application Security - Black Hat
Introduction to Web Application Security and App
E-security solutions: Web Applications Security and challenges
Web Application Security 2
Building a Robust Web Application Security Plan
Web Application Security Whitepaper

Monday, July 4, 2011

Seminar on .NET framework

The .NET Framework (pronounced dot net) is a software framework that runs primarily on Microsoft Windows. It includes a large library and supports several programming languages, which allows language interoperability (each language can use code written in other languages). The .NET library is available to all the programming languages that .NET supports. Programs written for the .NET Framework execute in a software environment (as contrasted to a hardware environment) known as the Common Language Runtime (CLR), an application virtual machine that provides important services such as security, memory management, and exception handling. The class library and the CLR together constitute the .NET Framework.


The .NET Framework's Base Class Library provides facilities for user interfaces, data access, database connectivity, cryptography, web application development, numeric algorithms, and network communications. Programmers produce software by combining their own source code with the .NET Framework and other libraries. The .NET Framework is intended to be used by most new applications created for the Windows platform. Microsoft also produces a popular integrated development environment largely for .NET software called Visual Studio.


Power point presentation on .NET Framework

Friday, July 1, 2011

Seminar on fingerprint recognition

Fingerprint recognition or fingerprint authentication refers to the automated method of verifying a match between two human fingerprints. Fingerprints are one of many forms of biometrics used to identify individuals and verify their identity. This article touches on two major classes of algorithms (minutiae and pattern) and four sensor designs (optical, ultrasonic, passive capacitance, and active capacitance).

A fingerprint sensor is an electronic device used to capture a digital image of the fingerprint pattern. The captured image is called a live scan. This live scan is digitally processed to create a biometric template (a collection of extracted features) which is stored and used for matching. This is an overview of some of the more commonly used fingerprint sensor technologies.


Optical fingerprint imaging involves capturing a digital image of the print using visible light. This type of sensor is, in essence, a specialized digital camera. The top layer of the sensor, where the finger is placed, is known as the touch surface. Beneath this layer is a light-emitting phosphor layer which illuminates the surface of the finger. The light reflected from the finger passes through the phosphor layer to an array of solid state pixels (a charge-coupled device) which captures a visual image of the fingerprint. A scratched or dirty touch surface can cause a bad image of the fingerprint. A disadvantage of this type of sensor is the fact that the imaging capabilities are affected by the quality of skin on the finger. For instance, a dirty or marked finger is difficult to image properly. Also, it is possible for an individual to erode the outer layer of skin on the fingertips to the point where the fingerprint is no longer visible. It can also be easily fooled by an image of a fingerprint if not coupled with a "live finger" detector. However, unlike capacitive sensors, this sensor technology is not susceptible to electrostatic discharge damage.

Power point Presentation on Fingerprint recognition

Thursday, June 30, 2011

Project Abstract - E-Learning

E-learning comprises all forms of electronically supported learning and teaching. The information and communication systems, whether networked or not, serve as specific media to implement the learning process.[1] The term is still most likely to be used to refer to out-of-classroom and in-classroom educational experiences delivered via technology, even as devices and curricula continue to advance.


E-learning is essentially the computer and network-enabled transfer of skills and knowledge. E-learning applications and processes include Web-based learning, computer-based learning, virtual education opportunities and digital collaboration. Content is delivered via the Internet, intranet/extranet, audio or video tape, satellite TV, and CD-ROM. It can be self-paced or instructor-led and includes media in the form of text, image, animation, streaming video and audio.

Abbreviations like CBT (Computer-Based Training), IBT (Internet-Based Training) and WBT (Web-Based Training) have been used as synonyms for e-learning. Today one can still find these terms being used, along with variations of e-learning such as elearning, Elearning, and eLearning. These terms are used throughout this article under the broader terminology of e-learning.

Power point presentation on E-Learning

Sunday, June 26, 2011

Ebook - Linux Kernel Development

Linux Kernel Development details the design and implementation of the Linux kernel, presenting the content in a manner that is beneficial to those who wish to write and develop kernel code. This book is for anyone who wants a fun, practical approach to the Linux kernel.

The author, a core kernel developer, shares valuable knowledge and experience on the very latest Linux kernel.


The book discusses the major subsystems and features of the Linux kernel, including their design and implementation, their purpose and goals, and their interfaces. Specific topics covered include: process management, scheduling, time management and timers, system call interface, memory addressing and management, caching layers, VFS, kernel synchronization, debugging, and the kernel community.

The book covers the new 2.6 Linux kernel, and includes numerous sections on its new features, such as the new O(1) scheduler, the new I/O schedulers, the new block layer, and kernel preemption.

This book is an authoritative, practical guide that helps programmers better understand the Linux kernel, and to write and develop kernel code.


Download Ebook

Linux Kernel Development

Project Abstract - Combinatorial Approach For Preventing SQL Injection Attacks

This paper discusses a combinatorial approach for protecting Web applications against SQL injection: a novel idea that combines the strengths of signature-based detection and auditing. SQL injection is a major issue in web application security; it can give attackers unrestricted access to the databases that underlie Web applications, and has become increasingly frequent and serious. From the signature-based standpoint, the paper presents a detection mode for SQL injection using pairwise sequence alignment of an amino-acid code formulated from the web application form parameters sent via the web server.


From the auditing standpoint, on the other hand, it analyzes transactions to find malicious access. The signature-based method uses the Hirschberg algorithm, a divide-and-conquer approach that reduces the space complexity of the alignment. This system was able to stop all of the successful attacks without generating any false positives.
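
To illustrate the alignment machinery, here is a Python sketch of a global alignment score computed with only two rows of the dynamic-programming table, the linear-space trick at the heart of Hirschberg's algorithm. The scoring values and the example "signature" are invented; the paper's actual scheme aligns amino-acid codes derived from form parameters.

```python
def align_score(a, b, match=2, mismatch=-1, gap=-1):
    """Global alignment score (Needleman-Wunsch recurrence) kept in two
    rows: the linear-space idea Hirschberg's algorithm builds on."""
    prev = [j * gap for j in range(len(b) + 1)]
    for i in range(1, len(a) + 1):
        curr = [i * gap]
        for j in range(1, len(b) + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr.append(max(diag, prev[j] + gap, curr[j - 1] + gap))
        prev = curr
    return prev[-1]

# A submitted value scoring close to a known attack signature is flagged.
signature = "UNIONSELECT"          # hypothetical stored signature
submitted = "UNION SELECT"         # attacker input from a form field
print(align_score(signature, submitted))   # high score -> suspicious
```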

Project Abstract - Resequencing Analysis Of Stop-And-Wait ARQ For Parallel Multichannel Communications

In this paper, we consider a multichannel data communication system in which the stop-and-wait automatic-repeat-request protocol for parallel channels with an in-sequence delivery guarantee (MSW-ARQ-inS) is used for error control. We evaluate the resequencing delay and the resequencing buffer occupancy. Under the assumption that all channels have the same transmission rate but possibly different time-invariant error rates, we derive the probability generating function of the resequencing buffer occupancy and the probability mass function of the resequencing delay. Then, by assuming the Gilbert–Elliott model for each channel, we extend our analysis to time-varying channels. Through examples, we compute the probability mass functions of the resequencing buffer occupancy and the resequencing delay for time-invariant channels. From numerical and simulation results, we analyze trends in the mean resequencing buffer occupancy and the mean resequencing delay as functions of system parameters. We expect that the modeling technique and analytical approach used in this paper can be applied to the performance evaluation of other ARQ protocols (e.g., the selective-repeat ARQ) over multiple time-varying channels. Index terms: in-sequence delivery, modeling and performance, multichannel data communications, resequencing buffer occupancy, resequencing delay, SW-ARQ.
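
A quick Monte-Carlo sketch can make the resequencing effect tangible: packets are assigned round-robin to parallel stop-and-wait channels, each retransmits until success, and delivered packets wait until all earlier packets have arrived. The Python below uses invented error rates and slot-based timing; the paper derives these quantities analytically rather than by simulation.

```python
import random

def simulate(num_packets=10000, error_rates=(0.05, 0.2, 0.4), seed=1):
    """SW-ARQ over parallel channels: one slot per attempt, geometric
    retransmissions, in-sequence release from a resequencing buffer."""
    random.seed(seed)
    n_ch = len(error_rates)
    clock = [0] * n_ch              # next free slot on each channel
    done = [0] * num_packets        # completion time of each packet
    for k in range(num_packets):
        ch = k % n_ch               # round-robin channel assignment
        attempts = 1
        while random.random() < error_rates[ch]:
            attempts += 1           # retransmit until the channel succeeds
        clock[ch] += attempts
        done[k] = clock[ch]
    # in-sequence release: packet k leaves when packets 0..k are all done
    release, delays = 0, []
    for k in range(num_packets):
        release = max(release, done[k])
        delays.append(release - done[k])
    return sum(delays) / num_packets

print(f"mean resequencing delay: {simulate():.2f} slots")
```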

Project Abstract - Cell Breathing Techniques For Load Balancing In Wireless Lans

Maximizing network throughput while providing fairness is one of the key challenges in wireless LANs (WLANs). This goal is typically achieved when the load of access points (APs) is balanced. Recent studies on operational WLANs, however, have shown that AP load is often substantially uneven. To alleviate such imbalance of load, several load balancing schemes have been proposed. These schemes commonly require proprietary software or hardware at the user side for controlling the user-AP association. In this paper we present a new load balancing technique by controlling the size of WLAN cells (i.e., AP’s coverage range), which is conceptually similar to cell breathing in cellular networks.


The proposed scheme requires neither modification on the user side nor any change to the IEEE 802.11 standard. It only requires the ability to dynamically change the transmission power of the AP beacon messages. We develop a set of polynomial-time algorithms that find the optimal beacon power settings which minimize the load of the most congested AP. We also consider the problem of network-wide min-max load balancing. Simulation results show that the performance of the proposed method is comparable with or superior to the best existing association-based methods.
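
A greedy caricature of cell breathing is easy to write down: users associate with the AP whose beacon they hear strongest, and each round the most loaded AP lowers its beacon power to shed users. The Python sketch below uses made-up path gains and a fixed power step; the paper instead computes optimal power settings with polynomial-time algorithms.

```python
# path_gain[user][ap]: received-signal attenuation in dB (invented).
path_gain = {
    "u1": {"ap1": -40, "ap2": -70},
    "u2": {"ap1": -45, "ap2": -65},
    "u3": {"ap1": -50, "ap2": -60},
    "u4": {"ap1": -75, "ap2": -55},
}
power = {"ap1": 20, "ap2": 20}   # beacon transmit power in dBm

def loads():
    """Each user associates with the AP whose beacon it hears strongest."""
    load = {ap: 0 for ap in power}
    for user, gains in path_gain.items():
        best = max(power, key=lambda ap: power[ap] + gains[ap])
        load[best] += 1
    return load

for step in range(5):
    load = loads()
    print(step, load, power)
    congested = max(load, key=load.get)
    if load[congested] - min(load.values()) <= 1:
        break                     # balanced to within one user
    power[congested] -= 5         # shrink the most congested cell
```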

Project Abstract - Greedy Routing With Anti-Void Traversal For Wireless Sensor Networks

The unreachability problem (i.e., the so-called void problem) that exists in the greedy routing algorithms has been studied for the wireless sensor networks. Some of the current research work cannot fully resolve the void problem, while there exist other schemes that can guarantee the delivery of packets with the excessive consumption of control overheads. In this paper, a greedy anti-void routing (GAR) protocol is proposed to solve the void problem with increased routing efficiency by exploiting the boundary finding technique for the unit disk graph (UDG). The proposed rolling-ball UDG boundary traversal (RUT) is employed to completely guarantee the delivery of packets from the source to the destination node under the UDG network. The boundary map (BM) and the indirect map searching (IMS) scheme are proposed as efficient algorithms for the realization of the RUT technique.


Moreover, the hop count reduction (HCR) scheme is utilized as a short-cutting technique to reduce the routing hops by listening to the neighbor’s traffic, while the intersection navigation (IN) mechanism is proposed to obtain the best rolling direction for boundary traversal with the adoption of the shortest path criterion. In order to maintain the network requirement of the proposed RUT scheme under non-UDG networks, the partial UDG construction (PUC) mechanism is proposed to transform the non-UDG into a UDG setting for a portion of nodes that facilitate boundary traversal. These three schemes are incorporated within the GAR protocol to further enhance the routing performance with reduced communication overhead. The proofs of correctness for the GAR scheme are also given in this paper. Compared with existing localized routing algorithms, the simulation results show that the proposed GAR-based protocols can provide better routing efficiency.
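
The void problem itself is easy to demonstrate. In the Python sketch below, greedy geographic forwarding hands the packet to the neighbor closest to the destination and stops when no neighbor makes progress, which is exactly where GAR's rolling-ball traversal would take over. Node positions and radio range are invented.

```python
import math

RANGE = 1.5   # radio range (invented)
nodes = {"s": (0, 0), "a": (1, 0.2), "b": (2, 0.1), "d": (4, 0)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_route(src, dst):
    """Forward to the neighbor closest to the destination; report a
    void when no neighbor is closer than the current node."""
    path, cur = [src], src
    while cur != dst:
        neighbors = [n for n in nodes
                     if n != cur and dist(nodes[n], nodes[cur]) <= RANGE]
        nxt = min(neighbors, key=lambda n: dist(nodes[n], nodes[dst]),
                  default=None)
        if nxt is None or dist(nodes[nxt], nodes[dst]) >= dist(nodes[cur], nodes[dst]):
            return path, "void"   # GAR's boundary traversal would start here
        path.append(nxt)
        cur = nxt
    return path, "delivered"

print(greedy_route("s", "d"))   # (['s', 'a', 'b'], 'void')
```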


Project Abstract - Route Stability In MANETs Under The Random Direction Mobility Model

A fundamental issue arising in mobile ad hoc networks (MANETs) is the selection of the optimal path between any two nodes. A method that has been advocated to improve routing efficiency is to select the most stable path so as to reduce the latency and the overhead due to route reconstruction.


In this work, we study both the availability and the duration probability of a routing path that is subject to link failures caused by node mobility. In particular, we focus on the case where the network nodes move according to the Random Direction model, and we derive both exact and approximate (but simple) expressions of these probabilities. Through our results, we study the problem of selecting an optimal route in terms of path availability. Finally, we propose an approach to improve the efficiency of reactive routing protocols.
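
A small Monte-Carlo experiment conveys what link duration under the Random Direction model looks like: each node repeatedly picks a uniform direction and travels at constant speed for an exponentially distributed epoch, and we time how long two nodes stay within radio range. All parameters in this Python sketch are illustrative; the paper derives exact and approximate expressions instead.

```python
import math
import random

SPEED, RANGE, MEAN_EPOCH, DT = 1.0, 10.0, 5.0, 0.1   # invented parameters

def link_duration(rng):
    """Time until two Random Direction nodes drift out of radio range."""
    pos = [[0.0, 0.0], [5.0, 0.0]]                    # start within range
    ang = [rng.uniform(0, 2 * math.pi) for _ in range(2)]
    left = [rng.expovariate(1 / MEAN_EPOCH) for _ in range(2)]
    t = 0.0
    while math.dist(pos[0], pos[1]) <= RANGE:
        t += DT
        for i in range(2):
            pos[i][0] += SPEED * DT * math.cos(ang[i])
            pos[i][1] += SPEED * DT * math.sin(ang[i])
            left[i] -= DT
            if left[i] <= 0:    # epoch over: new direction, new epoch
                ang[i] = rng.uniform(0, 2 * math.pi)
                left[i] = rng.expovariate(1 / MEAN_EPOCH)
    return t

rng = random.Random(7)
samples = [link_duration(rng) for _ in range(2000)]
print(f"mean link duration: {sum(samples) / len(samples):.1f} time units")
```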

Presentation on mobile ad hoc networks

Project Abstract - Tabu Search Algorithm For Cluster Building In Wireless Sensor Networks

The main challenge in wireless sensor network deployment pertains to optimizing energy consumption when collecting data from sensor nodes. This paper proposes a new centralized clustering method for a data collection mechanism in wireless sensor networks, which is based on network energy maps and Quality-of-Service (QoS) requirements. The clustering problem is modeled as a hypergraph partitioning and its resolution is based on a tabu search heuristic.


Our approach defines moves using largest size cliques in a feasibility cluster graph. Compared to other methods (CPLEX-based method, distributed method, simulated annealing-based method), the results show that our tabu search-based approach returns high-quality solutions in terms of cluster cost and execution time. As a result, this approach is suitable for handling network extensibility in a satisfactory manner.
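
For readers unfamiliar with the metaheuristic, here is a bare-bones tabu search on a toy clustering objective (assign sensors to cluster heads to minimize squared distance). It shows only the tabu mechanics, a short-term memory that forbids undoing recent moves plus an aspiration rule; the paper's moves operate on cliques of a feasibility cluster graph with energy and QoS terms folded into the cost.

```python
import random

random.seed(3)
sensors = [(random.random(), random.random()) for _ in range(20)]
heads = [(0.2, 0.2), (0.8, 0.8), (0.5, 0.1)]   # fixed candidate heads

def cost(assign):
    """Total squared distance of sensors to their assigned heads."""
    return sum((sx - heads[a][0]) ** 2 + (sy - heads[a][1]) ** 2
               for (sx, sy), a in zip(sensors, assign))

assign = [random.randrange(len(heads)) for _ in sensors]
best, best_cost = assign[:], cost(assign)
tabu = {}                                  # move -> iteration it expires
for it in range(200):
    # best non-tabu single-sensor reassignment; aspiration lets a tabu
    # move through if it beats the best solution found so far
    candidates = []
    for i in range(len(sensors)):
        for a in range(len(heads)):
            if a == assign[i]:
                continue
            trial = assign[:]
            trial[i] = a
            c = cost(trial)
            if tabu.get((i, a), -1) < it or c < best_cost:
                candidates.append((c, i, a))
    c, i, a = min(candidates)
    tabu[(i, assign[i])] = it + 7          # forbid undoing this move for a while
    assign[i] = a
    if c < best_cost:
        best, best_cost = assign[:], c

print(f"best clustering cost: {best_cost:.3f}")
```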