Sunday, September 23, 2012

SET Based Approach to Secure the Payment in Mobile Commerce

In this paper we propose an approach combining the SET protocol with the TLS/WTLS protocols in order to enforce security services over WAP 1.X for payment in m-commerce. We propose to implement the additional services of the SET protocol, such as the confidentiality of the payment information between the buyer and the payment gateway, and data integrity. However, we use WTLS certificates instead of SET certificates, which avoids the heaviness of SET certification. Moreover, this approach eliminates the "WAP gap", since the payment information is decrypted neither within the WAP gateway nor on the seller side.

Download

Project abstract - Optimizing Search Engines using Clickthrough Data

This paper presents an approach to automatically optimizing the retrieval quality of search engines using clickthrough data. Intuitively, a good information retrieval system should present relevant documents high in the ranking, with less relevant documents following below. While previous approaches to learning retrieval functions from examples exist, they typically require training data generated from relevance judgments by experts. This makes them difficult and expensive to apply. The goal of this paper is to develop a method that utilizes clickthrough data for training, namely the query-log of the search engine in connection with the log of links the users clicked on in the presented ranking. Such clickthrough data is available in abundance and can be recorded at very low cost. Taking a Support Vector Machine (SVM) approach, this paper presents a method for learning retrieval functions. From a theoretical perspective, this method is shown to be well-founded in a risk minimization framework. Furthermore, it is shown to be feasible even for large sets of queries and features. The theoretical results are verified in a controlled experiment. It shows that the method can effectively adapt the retrieval function of a meta-search engine to a particular group of users, outperforming Google in terms of retrieval quality after only a couple of hundred training examples.
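
To make the approach more concrete, here is a small hedged sketch (all class and field names are invented for the example, not taken from the paper) of the usual way clickthrough data is turned into relative relevance judgments: a clicked result is treated as preferred over the unclicked results that were ranked above it, and the resulting pairs become training constraints for a ranking SVM.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical record of one query and its presented ranking.
class ClickLogEntry {
    String query;
    List<String> rankedDocs;   // documents in the order they were shown
    Set<String> clickedDocs;   // documents the user clicked

    ClickLogEntry(String query, List<String> rankedDocs, Set<String> clickedDocs) {
        this.query = query;
        this.rankedDocs = rankedDocs;
        this.clickedDocs = clickedDocs;
    }
}

// A preference "doc A should rank above doc B for this query".
class PreferencePair {
    String query, preferred, nonPreferred;
    PreferencePair(String q, String p, String n) { query = q; preferred = p; nonPreferred = n; }
}

public class ClickthroughPairs {
    // Derive pairwise preferences: a clicked document is assumed to be more
    // relevant than any unclicked document that was ranked above it.
    static List<PreferencePair> extractPairs(ClickLogEntry e) {
        List<PreferencePair> pairs = new ArrayList<>();
        for (int i = 0; i < e.rankedDocs.size(); i++) {
            String doc = e.rankedDocs.get(i);
            if (!e.clickedDocs.contains(doc)) continue;
            for (int j = 0; j < i; j++) {
                String above = e.rankedDocs.get(j);
                if (!e.clickedDocs.contains(above)) {
                    pairs.add(new PreferencePair(e.query, doc, above));
                }
            }
        }
        return pairs; // these pairs become constraints for a ranking SVM
    }
}
```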

Download

Project abstract - Mining Sequential Patterns

We are given a large database of customer transactions, where each transaction consists of a customer-id, a transaction time, and the items bought in the transaction. We introduce the problem of mining sequential patterns over such databases. We present three algorithms to solve this problem and empirically evaluate their performance using synthetic data. Two of the proposed algorithms, AprioriSome and AprioriAll, have comparable performance, albeit AprioriSome performs a little better when the minimum number of customers that must support a sequential pattern is low. Scale-up experiments show that both AprioriSome and AprioriAll scale linearly with the number of customer transactions. They also have excellent scale-up properties with respect to the number of transactions per customer and the number of items in a transaction.
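
A hedged illustration of the support-counting step such algorithms rely on (not the AprioriAll/AprioriSome candidate generation itself): the sketch below tests whether a candidate sequence occurs, in order, within a customer's transaction sequence and computes the fraction of customers that support it. Items are simplified to single strings, and all names are invented for the example.

```java
import java.util.Arrays;
import java.util.List;

public class SequenceSupport {
    // A candidate sequence is supported by a customer if its items appear
    // in the customer's transaction sequence in the same order (not
    // necessarily contiguously).
    static boolean supports(List<String> customerSequence, List<String> candidate) {
        int next = 0;
        for (String item : customerSequence) {
            if (next < candidate.size() && item.equals(candidate.get(next))) next++;
        }
        return next == candidate.size();
    }

    // Support = fraction of customers whose sequences contain the candidate.
    static double support(List<List<String>> customers, List<String> candidate) {
        long count = customers.stream().filter(c -> supports(c, candidate)).count();
        return (double) count / customers.size();
    }

    public static void main(String[] args) {
        List<List<String>> customers = Arrays.asList(
                Arrays.asList("ring", "bracelet", "necklace"),
                Arrays.asList("ring", "necklace"),
                Arrays.asList("bracelet"));
        System.out.println(support(customers, Arrays.asList("ring", "necklace"))); // 0.666...
    }
}
```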

Download

Project abstract - Developing Custom Intrusion Detection Filters Using Data Mining

One aspect of constructing secure networks is identifying unauthorized use of those networks. Intrusion Detection systems look for unusual or suspicious activity, such as patterns of network traffic that are likely indicators of unauthorized activity. However, normal operation often produces traffic that matches likely "attack signatures", resulting in false alarms. We are using data mining techniques to identify sequences of alarms that likely result from normal behavior, enabling construction of filters to eliminate those alarms. This can be done at low cost for specific environments, enabling the construction of customized intrusion detection filters. We present our approach, and preliminary results identifying common sequences in alarms from a particular environment.

Download

Seminar on Distributed Database Systems

A distributed database is a database in which storage devices are not all attached to a common processing unit such as the CPU. It may be stored in multiple computers located in the same physical location, or may be dispersed over a network of interconnected computers. Unlike parallel systems, in which the processors are tightly coupled and constitute a single database system, a distributed database system consists of loosely coupled sites that share no physical components.

Collections of data (e.g. in a database) can be distributed across multiple physical locations. A distributed database can reside on network servers on the Internet, on corporate intranets or extranets, or on other company networks. The replication and distribution of databases improves database performance at end-user worksites. 

To ensure that distributed databases stay current, there are two processes: replication and duplication. Replication uses specialized software that looks for changes in the distributed databases; once the changes have been identified, the replication process makes all the databases look the same. Replication can be complex and time consuming depending on the size and number of the distributed databases, and it can require significant time and computing resources. Duplication, on the other hand, is less complicated: it identifies one database as a master and then duplicates that database to the other sites, normally at a set time after hours, so that each distributed location has the same data. In the duplication process, changes are allowed only to the master database, which ensures that local data will not be overwritten. Both processes can keep the data current in all distributed locations.
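
A toy sketch of the duplication idea described above, with in-memory key-value maps standing in for real databases (everything here is illustrative, not a real DBMS feature): only the master accepts writes, and a scheduled job overwrites each replica with a copy of the master.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DuplicationDemo {
    // Master copy: the only database that accepts changes.
    static final Map<String, String> master = new HashMap<>();
    // Distributed copies at other sites (simplified to in-memory maps).
    static final List<Map<String, String>> replicas =
            List.of(new HashMap<>(), new HashMap<>());

    // Writes go to the master only, so local copies never diverge on their own
    // and local data is never the source of truth.
    static void updateMaster(String key, String value) {
        master.put(key, value);
    }

    // The "after hours" duplication job: overwrite every replica with the master.
    static void duplicate() {
        for (Map<String, String> replica : replicas) {
            replica.clear();
            replica.putAll(master);
        }
    }

    public static void main(String[] args) {
        updateMaster("customer:42", "Alice");
        duplicate();
        System.out.println(replicas.get(0)); // {customer:42=Alice}
    }
}
```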

Reference

Distributed Databases

Distributed Database System - DUET

Database Management Systems

Distributed Database Systems

Distributed Databases - Prentice Hall

Query evaluation techniques for large databases

Database management systems will continue to manage large data volumes. Thus, efficient algorithms for accessing and manipulating large sets and sequences will be required to provide acceptable performance. The advent of object-oriented and extensible database systems will not solve this problem. On the contrary, modern data models exacerbate it: In order to manipulate large sets of complex objects as efficiently as today’s database systems manipulate simple records, query processing algorithms and software will become more complex, and a solid understanding of algorithm and architectural issues is essential for the designer of database management software. This survey provides a foundation for the design and implementation of query execution facilities in new database management systems. It describes a wide array of practical query evaluation techniques for both relational and post-relational database systems, including iterative execution of complex query evaluation plans, the duality of sort- and hash-based set matching algorithms, types of parallel query execution and their implementation, and special operators for emerging database application domains.

Download

Paper presentation- Parallel database systems: the future of high performance database systems

Abstract: Parallel database machine architectures have evolved from the use of exotic hardware to a software parallel dataflow architecture based on conventional shared-nothing hardware. These new designs provide impressive speedup and scaleup when processing relational database queries. This paper reviews the techniques used by such systems, and surveys current commercial and research systems.

Download

Image Mosaicing for Tele-Reality Applications

While a large number of virtual reality applications, such as fluid flow analysis and molecular modeling, deal with simulated data, many newer applications attempt to recreate true reality as convincingly as possible. Building detailed models for such applications, which we call tele-reality, is a major bottleneck holding back their deployment. In this paper, we present techniques for automatically deriving realistic 2-D scenes and 3-D texture-mapped models from video sequences, which can help overcome this bottleneck. The fundamental technique we use is image mosaicing, i.e., the automatic alignment of multiple images into larger aggregates which are then used to represent portions of a 3-D scene. We begin with the easiest problems, those of flat scene and panoramic scene mosaicing, and progress to more complicated scenes, culminating in full 3-D models. We also present a number of novel applications based on tele-reality technology.

Download

Globus: A Metacomputing Infrastructure Toolkit

Emerging high-performance applications require the ability to exploit diverse, geographically distributed resources. These applications use high-speed networks to integrate supercomputers, large databases, archival storage devices, advanced visualization devices, and/or scientific instruments to form networked virtual supercomputers or metacomputers. While the physical infrastructure to build such systems is becoming widespread, the heterogeneous and dynamic nature of the metacomputing environment poses new challenges for developers of system software, parallel tools, and applications. In this article, we introduce Globus, a system that we are developing to address these challenges. The Globus system is intended to achieve a vertically integrated treatment of application, middleware, and network. A low-level toolkit provides basic mechanisms such as communication, authentication, network information, and data access. These mechanisms are used to construct various higher-level metacomputing services.

Download

U-Net: A User-Level Network Interface for Parallel and Distributed Computing

The U-Net communication architecture provides processes with a virtual view of a network interface to enable user-level access to high-speed communication devices. The architecture, implemented on standard workstations using off-the-shelf ATM communication hardware, removes the kernel from the communication path, while still providing full protection. The model presented by U-Net allows for the construction of protocols at user level whose performance is only limited by the capabilities of the network. The architecture is extremely flexible in the sense that traditional protocols like TCP and UDP, as well as novel abstractions like Active Messages, can be implemented efficiently. A U-Net prototype on an 8-node ATM cluster of standard workstations offers 65 microseconds round-trip latency and 15 Mbytes/sec bandwidth. It achieves TCP performance at maximum network bandwidth and demonstrates performance equivalent to Meiko CS-2 and TMC CM-5 supercomputers on a set of Split-C benchmarks.

Download

Energy-efficient communication protocol for wireless microsensor networks

Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
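
The randomized cluster-head rotation can be sketched roughly as follows. This is an illustrative simplification rather than the full LEACH protocol: in each round a node that has not recently served as cluster-head elects itself with a probability derived from a threshold chosen so that, on average, a fraction P of the nodes become cluster-heads per round.

```java
import java.util.Random;

public class LeachElectionSketch {
    static final double P = 0.05;          // desired fraction of cluster-heads per round
    static final Random rng = new Random();

    // Threshold in the commonly cited formulation of LEACH's rotation:
    // nodes that have already served as cluster-head in the current cycle
    // get threshold 0, others get an increasing threshold so every node
    // serves roughly once per 1/P rounds.
    static double threshold(int round, boolean wasClusterHeadThisCycle) {
        if (wasClusterHeadThisCycle) return 0.0;
        int cycleLength = (int) Math.round(1.0 / P);
        return P / (1.0 - P * (round % cycleLength));
    }

    // A node elects itself cluster-head if a uniform random draw falls
    // below its threshold for this round.
    static boolean electSelf(int round, boolean wasClusterHeadThisCycle) {
        return rng.nextDouble() < threshold(round, wasClusterHeadThisCycle);
    }

    public static void main(String[] args) {
        for (int round = 0; round < 5; round++) {
            System.out.println("round " + round + ": " + electSelf(round, false));
        }
    }
}
```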

Download

Project abstract- The Entity-Relationship Model

A data model, called the entity-relationship model, is proposed. This model incorporates some of the important semantic information about the real world. A special diagrammatic technique is introduced as a tool for database design. An example of database design and description using the model and the diagrammatic technique is given. Some implications for data integrity, information retrieval, and data manipulation are discussed. The entity-relationship model can be used as a basis for unification of different views of data: the network model, the relational model, and the entity set model. Semantic ambiguities in these models are analyzed. Possible ways to derive their views of data from the entity-relationship model are presented. Key Words and Phrases: database design, logical view of data, semantics of data, data models, entity-relationship model, relational model, Data Base Task Group, network model.

Download

Project abstract - Freenet: A Distributed Anonymous Information Storage and Retrieval System

We describe Freenet, an adaptive peer-to-peer network application that permits the publication, replication, and retrieval of data while protecting the anonymity of both authors and readers. Freenet operates as a network of identical nodes that collectively pool their storage space to store data files and cooperate to route requests to the most likely physical location of data. No broadcast search or centralized location index is employed. Files are referred to in a location-independent manner, and are dynamically replicated in locations near requestors and deleted from locations where there is no interest. It is infeasible to discover the true origin or destination of a file passing through the network, and difficult for a node operator to determine or be held responsible for the actual physical contents of her own node.

Download

Project Abstract - Resilient Overlay Networks

A Resilient Overlay Network (RON) is an architecture that allows distributed Internet applications to detect and recover from path outages and periods of degraded performance within several seconds, improving over today’s wide-area routing protocols that take at least several minutes to recover. A RON is an application-layer overlay on top of the existing Internet routing substrate. The RON nodes monitor the functioning and quality of the Internet paths among themselves, and use this information to decide whether to route packets directly over the Internet or by way of other RON nodes, optimizing application-specific routing metrics. Results from two sets of measurements of a working RON deployed at sites scattered across the Internet demonstrate the benefits of our architecture. For instance, over a 64-hour sampling period in March 2001 across a twelve-node RON, there were 32 significant outages, each lasting over thirty minutes, over the 132 measured paths. RON’s routing mechanism was able to detect, recover, and route around all of them, in less than twenty seconds on average, showing that its methods for fault detection and recovery work well at discovering alternate paths in the Internet. Furthermore, RON was able to improve the loss rate, latency, or throughput perceived by data transfers; for example, about 5 % of the transfers doubled their TCP throughput and 5 % of our transfers saw their loss probability reduced by 0.05. We found that forwarding packets via at most one intermediate RON node is sufficient to overcome faults and improve performance in most cases. These improvements, particularly in the area of fault detection and recovery, demonstrate the benefits of moving some of the control over routing into the hands of end-systems.

Download

Paper presentation- Web based Information visualization

Information visualization, an emerging discipline, uses visual means to represent nonspatial, abstract data. To visualize such information, you must map this data into a physical space. Finding the appropriate visual mapping for the task at hand proves vital to producing effective visualizations. Information visualization can often help you find and understand relationships and structure within (seemingly) unstructured data. Recent widespread interest has focused on exploration of information visualization techniques and applications for just that reason. At the same time, information has become pervasive thanks to underlying mechanisms such as the World Wide Web (WWW) and corporate intranets. Visualizing Web-based information—either from the WWW or intranets—has become a common application of information visualization. Given these trends, the Web has naturally progressed as a source of information as well as an underlying delivery mechanism for interactive information visualization. To further explore these ideas, developers use tools such as Virtual Reality Modeling Language (VRML), Java, and Web browsers such as Netscape to create Web-based information visualization applications. While a Web-based delivery mechanism offers a number of advantages, it also imposes a number of limitations and problems.

Download

Project abstract - Survey of Image Registration Techniques

Registration is a fundamental task in image processing used to match two or more pictures taken, for example, at different times, from different sensors or from different viewpoints. Over the years, a broad range of techniques have been developed for the various types of data and problems. These techniques have been independently studied for several different applications resulting in a large body of research. This paper organizes this material by establishing the relationship between the distortions in the image and the type of registration techniques which are most suitable. Two major types of distortions are distinguished. The first type are those which are the source of misregistration, i.e., they are the cause of the misalignment between the two images. Distortions which are the source of misregistration determine the transformation class which will optimally align the two images. The transformation class in turn influences the general technique that should be taken.


Download

Download link2

Myrinet: A Gigabit-per-Second Local Area Network

Myrinet is a new type of local-area network (LAN) based on the technology used for packet communication and switching within "massively parallel processors" (MPPs). Think of Myrinet as an MPP message-passing network that can span campus dimensions, rather than as a wide-area telecommunications network that is operating in close quarters. The technical steps toward making Myrinet a reality included the development of (1) robust, 25m communication channels with flow control, packet framing, and error control; (2) self-initializing, low-latency, cut-through switches; (3) host interfaces that can map the network, select routes, and translate from network addresses to routes, as well as handle packet traffic; and (4) streamlined host software that allows direct communication between user processes and the network.

Background. In order to understand how Myrinet differs from conventional LANs such as Ethernet and FDDI, it is helpful to start with Myrinet's genealogy. Myrinet is rooted in the results of two ARPA-sponsored research projects: the Caltech Mosaic, an experimental, fine-grain multicomputer [1], and the USC Information Sciences Institute (USC/ISI) ATOMIC LAN [2, 3], which was built using Mosaic components. Myricom, Inc., is a startup company founded by members of these two research projects.

Multicomputer Message-Passing Networks. A multicomputer [4, 5] is an MPP architecture consisting of a collection of computing nodes, each with its own memory, connected by a message-passing network. The Caltech Mosaic was an experiment to "push the envelope" of multicomputer design and programming toward a system with up to tens of thousands of small, single-chip nodes rather than hundreds of circuit-board-size nodes. The fine-grain multicomputer places more extreme demands on the message-passing network due to the larger number of nodes and a greater interdependence between the computing processes on different nodes. The message-passing-network technology developed for the Mosaic [6] achieved its goals so well that it was used in several other MPP systems.

Download

Project - HTTPTracer

HTTPTracer is an application that sits between your HTTP client and your HTTP server and sniffs all the communication that goes on between the two. You can see what is really passing through your HTTP connections.

This is normally useful to:

  • understand if your caching mechanisms really work
  • understand if the browser is really using keep-alive or if it's generating new TCP/IP roundtrips for every object
  • understand if your browser is using HTTP pipelining or not
  • see what HTTP headers and actions are used
  • see how the data attached is encoded
  • see if the content is really encrypted/compressed or not
  • ... and all the stuff that your HTTP clients work so hard to hide from you (which is good if you are an end user, not so good if your job is not only to make your HTTP connection work but work well!)

How do I use it?

The application should be self-explanatory: after you launch it, create a connection to an existing web site by specifying the host name and the port, and specifying what local port it should attach to (for example 8080).

After you have created the connection listener, point your browser to http://127.0.0.1:8080/ (or the port that you used) and start browsing! All the links in the pages will be rewritten automatically so that you can keep browsing as if the remote web server were actually local, and you can see the tracing being logged as the HTTP connection goes on.
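
HTTPTracer itself is available ready-made from the links below, but the core idea (a local listener that relays traffic to the real server while logging what passes through) can be sketched in a few lines of Java. This is a simplified, hypothetical stand-in: the host name and ports are placeholders, it only prints the raw client-to-server bytes while blindly relaying data in both directions, it does not rewrite links the way HTTPTracer does, and it only makes sense for plain HTTP.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class TinyHttpTracer {
    public static void main(String[] args) throws Exception {
        String remoteHost = "example.org";   // the real web server (placeholder)
        int remotePort = 80, localPort = 8080;

        try (ServerSocket listener = new ServerSocket(localPort)) {
            while (true) {
                Socket client = listener.accept();
                Socket server = new Socket(remoteHost, remotePort);
                // Relay in both directions; log bytes going client -> server.
                new Thread(() -> relay(client, server, true)).start();
                new Thread(() -> relay(server, client, false)).start();
            }
        }
    }

    static void relay(Socket from, Socket to, boolean log) {
        try (InputStream in = from.getInputStream();
             OutputStream out = to.getOutputStream()) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                if (log) System.out.write(buf, 0, n);  // print raw request traffic
                out.write(buf, 0, n);
                out.flush();
            }
        } catch (Exception ignored) {
            // connection closed by either side
        }
    }
}
```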

Code download

Program download

Project Abstract- Edge Detection

For both biological systems and machines, vision begins with a large and unwieldy array of measurements of the amount of light reflected from surfaces in the environment. The goal of vision is to recover physical properties of objects in the scene, such as the location of object boundaries and the structure, color and texture of object surfaces, from the two-dimensional image that is projected onto the eye or camera. This goal is not achieved in a single step; vision proceeds in stages, with each stage producing increasingly more useful descriptions of the image and then the scene. The first clue about the physical properties of the scene is provided by the changes of intensity in the image. The importance of intensity changes and edges in early visual processing has led to extensive research on their detection, description, and use, both in computer and biological vision systems. This article reviews some of the theory that underlies the detection of edges, and the methods used to carry out this analysis.
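
To make the idea of detecting intensity changes concrete, here is a minimal Sobel-style sketch in Java (a standard gradient operator, offered as an illustration rather than the specific method reviewed in the article): it converts the image to grayscale, computes horizontal and vertical intensity gradients at each interior pixel, and marks pixels whose gradient magnitude exceeds a threshold as edges.

```java
import java.awt.image.BufferedImage;

public class SobelEdges {
    // Returns a binary edge map: true where the gradient magnitude of the
    // grayscale intensity exceeds the threshold.
    static boolean[][] detect(BufferedImage img, double threshold) {
        int w = img.getWidth(), h = img.getHeight();
        double[][] gray = new double[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int rgb = img.getRGB(x, y);
                int r = (rgb >> 16) & 0xff, g = (rgb >> 8) & 0xff, b = rgb & 0xff;
                gray[y][x] = 0.299 * r + 0.587 * g + 0.114 * b; // luminance
            }

        boolean[][] edges = new boolean[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                // Sobel kernels approximate the horizontal and vertical derivatives.
                double gx = -gray[y-1][x-1] - 2*gray[y][x-1] - gray[y+1][x-1]
                          +  gray[y-1][x+1] + 2*gray[y][x+1] + gray[y+1][x+1];
                double gy = -gray[y-1][x-1] - 2*gray[y-1][x] - gray[y-1][x+1]
                          +  gray[y+1][x-1] + 2*gray[y+1][x] + gray[y+1][x+1];
                edges[y][x] = Math.hypot(gx, gy) > threshold;
            }
        }
        return edges;
    }
}
```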

Download

Surround-screen projection-based virtual reality: The design and implementation of the CAVE

Abstract: This paper describes the CAVE (CAVE Automatic Virtual Environment) virtual reality/scientific visualization system in detail and demonstrates that projection technology applied to virtual-reality goals achieves a system that matches the quality of workstation screens in terms of resolution, color, and flicker-free stereo. In addition, this format helps reduce the effect of common tracking and system latency errors. The off-axis perspective projection techniques we use are shown to be simple and straightforward. Our techniques for doing multi-screen stereo vision are enumerated, and design barriers, past and current, are described. Advantages and disadvantages of the projection paradigm are discussed, with an analysis of the effect of tracking noise and delay on the user. Successive refinement, a necessary tool for scientific visualization, is developed in the virtual reality context. The use of the CAVE as a one-to-many presentation

Download

Fisherfaces: Recognition Using Class Specific Linear Projection

We develop a face recognition algorithm which is insensitive to gross variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3-D linear subspace of the high dimensional image space -- if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's Linear Discriminant and produces well separated classes in a low-dimensional subspace even under severe variation in lighting and facial expressions.

Download

Project – Library Management System in Java

An integrated library system (ILS), also known as a library management system (LMS) or as library automation, is an enterprise resource planning system for a library, used to track items owned, orders made, bills paid, and patrons who have borrowed.

An ILS usually comprises a relational database, software to interact with that database, and two graphical user interfaces (one for patrons, one for staff). Most ILSes separate software functions into discrete programs called modules, each of them integrated with a unified interface. Examples of modules might include:

  • acquisitions (ordering, receiving, and invoicing materials)
  • cataloging (classifying and indexing materials)
  • circulation (lending materials to patrons and receiving them back)
  • serials (tracking magazine and newspaper holdings)
  • the OPAC (public interface for users)


Each patron and item has a unique ID in the database that allows the ILS to track its activity.
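
A minimal sketch of how such modules might share one data model, using plain Java objects in place of the relational database (all class names here are invented for illustration): each patron and item carries a unique ID, and a small circulation module records loans against those IDs.

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class Patron {
    final String patronId;   // unique ID used by every module
    final String name;
    Patron(String patronId, String name) { this.patronId = patronId; this.name = name; }
}

class Item {
    final String itemId;     // unique ID, e.g. a barcode
    final String title;
    Item(String itemId, String title) { this.itemId = itemId; this.title = title; }
}

class Loan {
    final String itemId, patronId;
    final LocalDate dueDate;
    Loan(String itemId, String patronId, LocalDate dueDate) {
        this.itemId = itemId; this.patronId = patronId; this.dueDate = dueDate;
    }
}

// A tiny stand-in for the circulation module.
class Circulation {
    private final Map<String, Loan> activeLoans = new HashMap<>(); // keyed by itemId
    private final List<Loan> history = new ArrayList<>();

    void checkOut(Item item, Patron patron, int loanDays) {
        if (activeLoans.containsKey(item.itemId))
            throw new IllegalStateException("Item already on loan: " + item.itemId);
        activeLoans.put(item.itemId, new Loan(item.itemId, patron.patronId,
                LocalDate.now().plusDays(loanDays)));
    }

    void checkIn(Item item) {
        Loan loan = activeLoans.remove(item.itemId);
        if (loan != null) history.add(loan);  // keep the record for reporting
    }
}
```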

Larger libraries use an ILS to order and acquire, receive and invoice, catalog, circulate, track and shelve materials. Smaller libraries, such as those in private homes or non-profit organizations (like churches or synagogues, for instance), often forgo the expense and maintenance required to run an ILS, and instead use a simpler library computer system.

 

References

http://filetram.com/mediafire/librarymmgtsystem-java-zip-8919557726

http://sourceforge.net/projects/hotelmanagement/

Project – Student Management System in Java

A student information system (SIS) is a software application that education establishments use to manage student data in a school, college or university. It is also known as a student information management system (SIMS), student records system (SRS), student management system (SMS), campus management system (CMS) or school management system (SMS).

These systems vary in size, scope and capability, from packages that are implemented in relatively small organizations to cover student records alone, to enterprise-wide solutions that aim to cover most aspects of running large multi-campus organizations with significant local responsibility. Many systems can be scaled to different levels of functionality by purchasing add-on "modules" and can typically be configured by their home institutions to meet local needs.

Reference

Source http://lernjava.blogspot.in/2010/04/student-management-system-project-in.html

Second Open source program for reference http://sourceforge.net/p/freesms/code/76/tree/

 

 

Saturday, September 22, 2012

Seminar - Microsoft Silverlight

Microsoft Silverlight is an application framework for writing and running rich Internet applications, with features and purposes similar to those of Adobe Flash. The run-time environment for Silverlight is available as a plug-in for web browsers running under Microsoft Windows and Mac OS X. While early versions of Silverlight focused on streaming media, current versions support multimedia, graphics and animation, and give developers support for CLI languages and development tools. Silverlight is also one of the two application development platforms for Windows Phone, but Silverlight-enabled web pages cannot run on Internet Explorer for Windows Phone, as there is no plugin.


Over the course of about five years, Microsoft has released five versions: the first version was released in 2007 and the latest on May 8, 2012. It is compatible with multiple web browsers used on Microsoft Windows and Mac OS X operating systems, and with mobile devices using the Windows Mobile and Symbian (Series 60) platforms. A free software implementation named Moonlight, developed by Novell in cooperation with Microsoft, is available to bring Silverlight versions 1 and 2 functionality to Linux, FreeBSD and other open source platforms, although some Linux distributions do not include it, citing redistribution and patent concerns. In May 2012, Moonlight was abandoned because of the lack of popularity of Silverlight.

References

ArcGIS API for Microsoft Silverlight/WPF: Advanced Topics

Building Rich Web Applications with Microsoft SilverLight
Silverlight Webservices on Wall Street

Microsoft Silverlight

Microsoft Silverlight is a cross-browser, cross-platform

Project – Music Player in Java

Here is another project which can be used as a mini project. It is an open-source music player coded in Java. You can use it for reference to understand how Java can be used for designing applications, and it will definitely help you appreciate the power of Java. The player can also serve as a case study of the complexity involved in designing such an application.

 


You DO need J2SE 1.4 or higher (J2SE 1.5 or 1.6).

http://www.javazoom.net/jlgui/sources.html
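
For a feel of the underlying audio API, here is a minimal sketch that plays an uncompressed WAV/AIFF/AU clip with the standard javax.sound.sampled package. The jlgui player linked above is far more capable (MP3 decoding, playlists, a Swing UI); the file name below is just a placeholder.

```java
import java.io.File;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Clip;

public class MiniPlayer {
    public static void main(String[] args) throws Exception {
        // Placeholder path: point this at any uncompressed WAV/AIFF/AU file.
        File audioFile = new File(args.length > 0 ? args[0] : "sample.wav");
        try (AudioInputStream stream = AudioSystem.getAudioInputStream(audioFile)) {
            Clip clip = AudioSystem.getClip();
            clip.open(stream);
            clip.start();
            // Sleep until the clip has finished playing, then release it.
            Thread.sleep(clip.getMicrosecondLength() / 1000);
            clip.close();
        }
    }
}
```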

Mini Project- Chat server in Java

Online chat may refer to any kind of communication over the Internet that offers a real-time, direct transmission of text-based messages from sender to receiver; the delay before the receiver can view the sent message should not hamper the flow of communication in either direction. Online chat may address point-to-point communications as well as multicast communications from one sender to many receivers, may include voice and video chat, or may be a feature of a web conferencing service.

In a less stringent definition, online chat may be primarily any direct text-based or video-based (webcam) one-on-one chat or one-to-many group chat (formally also known as synchronous conferencing), using tools such as instant messengers, Internet Relay Chat, talkers and possibly MUDs. The expression online chat comes from the word chat, which means "informal conversation". Online chat includes web-based applications that allow communication - often directly addressed, but anonymous - between users in a multi-user environment. Web conferencing is a more specific online service that is often sold as a service hosted on a web server controlled by the vendor.

Below is the source code for a chat server in Java. Credit for this program goes to the person who designed and uploaded it for the general public.

http://pirate.shu.edu/~wachsmut/Teaching/CSAS2214/Virtual/Lectures/chat-client-server.html
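
Alongside that reference implementation, here is a separate minimal sketch of a broadcast chat server (my own illustration, not the linked code): it accepts any number of clients on a TCP port and relays each received line of text to every connected client, so plain telnet sessions can chat with each other.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

public class MiniChatServer {
    // Writers for all connected clients; thread-safe for concurrent broadcasts.
    private static final Set<PrintWriter> clients = new CopyOnWriteArraySet<>();

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(5000)) {
            System.out.println("Chat server listening on port 5000");
            while (true) {
                Socket socket = server.accept();
                new Thread(() -> handle(socket)).start(); // one thread per client
            }
        }
    }

    private static void handle(Socket socket) {
        try (BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            clients.add(out);
            String line;
            while ((line = in.readLine()) != null) {
                // Broadcast each received line to every connected client.
                for (PrintWriter client : clients) client.println(line);
            }
            clients.remove(out);
        } catch (Exception e) {
            System.err.println("Client disconnected: " + e.getMessage());
        }
    }
}
```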

Reference

link1

Project abstract - Nmap

Nmap (Network Mapper) is a security scanner originally written by Gordon Lyon (also known by his pseudonym Fyodor Vaskovich) used to discover hosts and services on a computer network, thus creating a "map" of the network. To accomplish its goal, Nmap sends specially crafted packets to the target host and then analyzes the responses.

Unlike many simple port scanners that just send packets at a predefined constant rate, Nmap accounts for the network conditions (latency fluctuations, network congestion, interference from the target with the scan) during the run. Also, owing to the large and active user community providing feedback and contributing to its features, Nmap has been able to extend its discovery capabilities beyond simply figuring out whether a host is up or down and which ports are open and closed; it can determine the operating system of the target, names and versions of the listening services, estimated uptime, type of device, and presence of a firewall.
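
Nmap's simplest technique, the TCP connect scan, can be approximated in a few lines of Java: try to open a connection to each port with a short timeout and report the ones that accept. This is only a toy illustration of the idea, not Nmap's raw-packet scanning, and it should only be pointed at hosts you are authorized to test; the target address below is a placeholder.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ConnectScan {
    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "127.0.0.1"; // target (placeholder)
        for (int port = 1; port <= 1024; port++) {
            try (Socket socket = new Socket()) {
                // A completed TCP handshake means something is listening here.
                socket.connect(new InetSocketAddress(host, port), 200);
                System.out.println("Port " + port + " open");
            } catch (IOException closedOrFiltered) {
                // Connection refused or timed out: the port is closed or filtered.
            }
        }
    }
}
```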


Nmap runs on Linux, Microsoft Windows, Solaris, HP-UX and BSD variants (including Mac OS X), and also on AmigaOS and SGI IRIX.  Linux is the most popular Nmap platform with Windows following it closely.

Reference link

Nmap - Free Security Scanner For Network Exploration & Security ...
link2
NMAP a Security Auditing Tool
Nmap Scripting Engine - scip AG
NMAP Scanning Options
Snort & Nmap

Seminar on 3D Computer Graphics

3D computer graphics (in contrast to 2D computer graphics) are graphics that use a three-dimensional representation of geometric data (often Cartesian) that is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be stored for viewing later or displayed in real-time.

3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and 3D may use 2D rendering techniques.


3D computer graphics are often referred to as 3D models. Apart from the rendered graphic, the model is contained within the graphical data file. However, there are differences. A 3D model is the mathematical representation of any three-dimensional object. A model is not technically a graphic until it is displayed. Due to 3D printing, 3D models are not confined to virtual space. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations.

Reference link

link1
link2

link3

link4
link5

Seminar - computer graphics

The term computer graphics has been used in a broad sense to describe "almost everything on computers that is not text or sound". Typically, the term computer graphics refers to several different things:

  • the representation and manipulation of image data by a computer
  • the various technologies used to create and manipulate images
  • the sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content, see study of computer graphics

Computer graphics is widespread today. Computer imagery is found on television, in newspapers (for example in weather reports), and in all kinds of medical investigation and surgical procedures. A well-constructed graph can present complex statistics in a form that is easier to understand and interpret. In the media, "such graphs are used to illustrate papers, reports, theses" and other presentation material.


Many powerful tools have been developed to visualize data. Computer generated imagery can be categorized into several different types: 2D, 3D, and animated graphics. As technology has improved, 3D computer graphics have become more common, but 2D computer graphics are still widely used. Computer graphics has emerged as a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Over the past decade, other specialized fields have been developed like information visualization, and scientific visualization more concerned with "the visualization of three dimensional phenomena (architectural, meteorological, medical, biological, etc.), where the emphasis is on realistic renderings of volumes, surfaces, illumination sources, and so forth, perhaps with a dynamic (time) component"

Reference links

link1
link2
link3
link4
link5

Computer graphics programs can be downloaded from http://www.w3professors.com/Pages/Courses/Computer-Graphics/Programs/CG-Program.html

Project abstract on File Management

Files can also be managed based on their location on a storage device. They are stored on a storage medium in binary form. Physically, the data is placed in a not-so-well-organized structure, due to fragmentation. However, the grouping of files into directories (for operating systems such as DOS, Unix, Linux) or folders (for the Mac OS and Windows) is done by changing an index of file information known as the File Allocation Table (FAT) or, for NTFS on recent versions of Windows, the Master File Table, depending on the operating system used. In this index, the physical location of a particular file on the storage medium is stored, as well as its position in the hierarchy of directories (as we see it using commands such as DIR or LS and programs such as Explorer or Finder).

On Unix/Linux machines the hierarchy is:

  • The root directory (/)
    • Directories (/usr "user" or /dev "device")
      • Sub-directories (/usr/local)
        • Files: data, devices, links, etc. (/usr/local/readme.txt or /dev/hda1, which is the hard disk device)

For DOS/Windows the hierarchy (along with examples):

  • Drive (C:)
    • Directory/Folder (C:\My Documents)
      • Sub-directory/Sub-folder (C:\My Documents\My Pictures)
        • File (C:\My Documents\My Pictures\VacationPhoto.jpg)

Commands such as:

  • Unix/Linux: cp, mv
  • DOS: copy, move
  • Windows: the Cut/Copy/Paste commands in the Edit menu of Explorer

can be used to manage (copy or move) the files to and from other directories.
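
The same copy and move operations are also available programmatically. Below is a minimal Java sketch using the standard java.nio.file API; the paths are placeholders chosen for the example.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class FileOps {
    public static void main(String[] args) throws IOException {
        // Placeholder paths; adjust to real files on your system.
        Path source = Paths.get("readme.txt");
        Path copy   = Paths.get("backup", "readme.txt");
        Path moved  = Paths.get("archive", "readme.txt");

        Files.createDirectories(copy.getParent());   // make sure target folders exist
        Files.createDirectories(moved.getParent());

        // Equivalent of cp / copy: duplicate the file, overwriting any old copy.
        Files.copy(source, copy, StandardCopyOption.REPLACE_EXISTING);

        // Equivalent of mv / move: relocate the copy into another directory.
        Files.move(copy, moved, StandardCopyOption.REPLACE_EXISTING);

        // Equivalent of DIR / ls: list what ended up in the archive directory.
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(moved.getParent())) {
            for (Path entry : entries) System.out.println(entry);
        }
    }
}
```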

Reference

File Management in C
link2
File Management - NUI Galway
Operating Systems and File Management

File Systems

File Management

Seminar on Disaster management

Emergencies, disasters, and catastrophes are not gradients; they are separate, distinct problems that require distinct strategies of response. Disasters are events distinguished from everyday emergencies by four factors: organizations are forced into more and different kinds of interactions than normal; organizations lose some of their normal autonomy; performance standards change; and more coordinated public sector/private sector relationships are required.

Catastrophes are distinct from disasters in that: most or all of the community-built structure is heavily impacted; local officials are unable to undertake their usual work roles; most, if not all, of the everyday community functions are sharply and simultaneously interrupted; and help from nearby communities cannot be provided. Assets are categorized as either living things, non-living things, cultural or economic. Hazards are categorized by their cause, either natural or human-made. The entire strategic management process is divided into four fields to aid in identification of the processes.

The four fields normally deal with risk reduction, preparing resources to respond to the hazard, responding to the actual damage caused by the hazard and limiting further damage (e.g., emergency evacuation, quarantine, mass decontamination, etc.), and returning as close as possible to the state before the hazard incident. The field occurs in both the public and private sector, sharing the same processes, but with different focuses.

References

link1
link2
link3
link4
Link5

Wednesday, September 19, 2012

Seminar on Routers

A router is a device that forwards data packets between computer networks, creating an overlay internetwork. A router is connected to two or more data lines from different networks. When a data packet comes in on one of the lines, the router reads the address information in the packet to determine its ultimate destination. Then, using information in its routing table or routing policy, it directs the packet to the next network on its journey. Routers perform the "traffic directing" functions on the Internet. A data packet is typically forwarded from one router to another through the networks that constitute the internetwork until it gets to its destination node.
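
At the heart of that forwarding step is a longest-prefix match against the routing table. Here is a simplified, hedged sketch in Java (a linear search over IPv4 prefixes, nothing like the optimized trie lookups in a real router); all addresses and next-hop names are made up for the example.

```java
import java.util.ArrayList;
import java.util.List;

public class RoutingTable {
    static class Route {
        final int prefix;        // network prefix as a 32-bit integer
        final int prefixLength;  // e.g. 24 for a /24
        final String nextHop;
        Route(int prefix, int prefixLength, String nextHop) {
            this.prefix = prefix; this.prefixLength = prefixLength; this.nextHop = nextHop;
        }
    }

    private final List<Route> routes = new ArrayList<>();

    void add(String networkPrefix, int prefixLength, String nextHop) {
        routes.add(new Route(toInt(networkPrefix), prefixLength, nextHop));
    }

    // Longest-prefix match: among all entries whose prefix covers the
    // destination, pick the most specific one (largest prefix length).
    String lookup(String destination) {
        int dest = toInt(destination);
        Route best = null;
        for (Route r : routes) {
            int mask = r.prefixLength == 0 ? 0 : -1 << (32 - r.prefixLength);
            if ((dest & mask) == (r.prefix & mask)
                    && (best == null || r.prefixLength > best.prefixLength)) {
                best = r;
            }
        }
        return best == null ? "drop" : best.nextHop;
    }

    private static int toInt(String dottedQuad) {
        String[] parts = dottedQuad.split("\\.");
        int value = 0;
        for (String p : parts) value = (value << 8) | Integer.parseInt(p);
        return value;
    }

    public static void main(String[] args) {
        RoutingTable table = new RoutingTable();
        table.add("0.0.0.0", 0, "isp-gateway");           // default route
        table.add("192.168.1.0", 24, "eth0");             // local LAN
        System.out.println(table.lookup("192.168.1.42")); // eth0
        System.out.println(table.lookup("8.8.8.8"));      // isp-gateway
    }
}
```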


The most familiar type of routers are home and small office routers that simply pass data, such as web pages and email, between the home computers and the owner's cable or DSL modem, which connects to the Internet through an ISP. More sophisticated routers, such as enterprise routers, connect large business or ISP networks up to the powerful core routers that forward data at high speed along the optical fiber lines of the Internet backbone.

Reference

Introduction to Routing and Packet Forwarding

Securing Routers Against Hackers & DoS - Net-Services
Router Architectures

Big, Fast Routers

Router Design and Optics

 

Seminar on Embedded system

An embedded system is a computer system designed for specific control functions within a larger system, often with real-time computing constraints. It is embedded as part of a complete device often including hardware and mechanical parts. By contrast, a general-purpose computer, such as a personal computer (PC), is designed to be flexible and to meet a wide range of end-user needs. Embedded systems control many devices in common use today. Embedded systems contain processing cores that are typically either microcontrollers or digital signal processors (DSP). The key characteristic, however, is being dedicated to handle a particular task. Since the embedded system is dedicated to specific tasks, design engineers can optimize it to reduce the size and cost of the product and increase the reliability and performance.


Some embedded systems are mass-produced, benefiting from economies of scale. Physically, embedded systems range from portable devices such as digital watches and MP3 players, to large stationary installations like traffic lights, factory controllers. Complexity varies from low, with a single microcontroller chip, to very high with multiple units, peripherals and networks mounted inside a large chassis or enclosure.

References

Embedded Systems
Real-Time embedded Systems
Embedded Systems Programming - Bjarne Stroustrup
HW/SW Codesign of Embedded Systems
Introduction to Embedded Systems – Intel

Tornado: An Embedded System Development Tool

 

Seminar on Digital image processing

Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Since images are defined over two dimensions (perhaps more) digital image processing may be modeled in the form of multidimensional systems.

 

Digital image processing allows the use of much more complex algorithms, and hence, can offer both more sophisticated performance at simple tasks, and the implementation of methods which would be impossible by analog means.

In particular, digital image processing is the only practical technology for:

  • Classification
  • Feature extraction
  • Pattern recognition
  • Projection
  • Multi-scale signal analysis

References

Digital Image Processing
Digital Image Processing

Digital Image Processing: Introduction

Digital Image Processing: Introduction
Image Processing

Seminar on Software testing

Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs (errors or other defects).

Software testing can be stated as the process of validating and verifying that a software program/application/product: meets the requirements that guided its design and development; works as expected; can be implemented with the same characteristics; and satisfies the needs of stakeholders.

Software testing, depending on the testing method employed, can be implemented at any time in the development process. Traditionally most of the test effort occurs after the requirements have been defined and the coding process has been completed, but in the Agile approaches most of the test effort is on-going. As such, the methodology of the test is governed by the chosen software development methodology.
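
As a small concrete illustration of "executing a program with the intent of finding defects", here is a unit test sketch in the JUnit 4 style; the class under test and its expected behaviour are invented for the example.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test.
class PriceCalculator {
    // Applies a percentage discount to an amount given in cents.
    static long discounted(long amountCents, int discountPercent) {
        return amountCents - (amountCents * discountPercent) / 100;
    }
}

public class PriceCalculatorTest {
    @Test
    public void appliesTenPercentDiscount() {
        // Verifies the behaviour the requirement describes: 10% off 2000 cents.
        assertEquals(1800, PriceCalculator.discounted(2000, 10));
    }

    @Test
    public void zeroDiscountLeavesPriceUnchanged() {
        assertEquals(2000, PriceCalculator.discounted(2000, 0));
    }
}
```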

 

Reference

Software Testing Strategies
Introduction to Software Testing
Principles of Software Testing
Software Testing and Analysis - SDML
Software Testing

 

Monday, September 17, 2012

Seminar on V-Model

The V-model represents a software development process (also applicable to hardware development) which may be considered an extension of the waterfall model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-Model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing. The horizontal and vertical axes represent time or project completeness (left-to-right) and level of abstraction (coarsest-grain abstraction uppermost), respectively. In the requirements analysis phase, the first step in the verification process, the requirements of the proposed system are collected by analyzing the needs of the user(s).

This phase is concerned with establishing what the ideal system has to perform. However, it does not determine how the software will be designed or built. Usually, the users are interviewed and a document called the user requirements document is generated. The user requirements document will typically describe the system's functional, interface, performance, data, security, etc. requirements as expected by the user. It is used by business analysts to communicate their understanding of the system to the users. The users carefully review this document, as it will serve as the guideline for the system designers in the system design phase. The user acceptance tests are designed in this phase. See also functional requirements. There are different methods for gathering requirements of both soft and hard methodologies, including interviews, questionnaires, document analysis, observation, throw-away prototypes, use cases, and static and dynamic views with users.

References

V-Model
V Model of Software Testing.ppt - Ning
Life Cycle Models
Software Process Models

Seminar on Agile software development

Agile software development is a group of software development methods based on iterative and incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. It promotes adaptive planning, evolutionary development and delivery, a time-boxed iterative approach, and encourages rapid and flexible response to change. It is a conceptual framework that promotes foreseen interactions throughout the development cycle.

The Agile Manifesto introduced the term in 2001. Incremental software development methods have been traced back to 1957. In 1974, a paper by E. A. Edmonds introduced an adaptive software development process. Concurrently and independently, the same methods were developed and deployed by the New York Telephone Company's Systems Development Center under the direction of Dan Gielan. In the early 1970s, Tom Gilb started publishing the concepts of Evolutionary Project Management (EVO), which has evolved into Competitive Engineering. During the mid to late 1970s, Gielan lectured extensively throughout the U.S. on this methodology, its practices, and its benefits. So-called lightweight software development methods evolved in the mid-1990s as a reaction against heavyweight methods, which were characterized by their critics as a heavily regulated, regimented, micromanaged, waterfall model of development. Proponents of lightweight methods (and now agile methods) contend that they are a return to development practices from early in the history of software development.

References

Introduction to Agile Modeling

Agile Software Development.ppt

Agility Requirements Management
Agile methods and techniques– some method comparisons

Agile - SCRUM Methodology

Seminar on waterfall model

The waterfall model is a sequential design process, often used in software development processes, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Conception, Initiation, Analysis, Design, Construction, Testing, Production/Implementation, and Maintenance.

The waterfall development model originates in the manufacturing and construction industries: highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Since no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development. The first known presentation describing use of similar phases in software engineering was held by Herbert D. Benington at the Symposium on Advanced Programming Methods for Digital Computers on 29 June 1956. This presentation was about the development of software for SAGE. In 1983 the paper was republished with a foreword by Benington pointing out that the process was not in fact performed in strict top-down fashion, but depended on a prototype. The first formal description of the waterfall model is often cited as a 1970 article by Winston W. Royce, although Royce did not use the term "waterfall" in this article. Royce presented this model as an example of a flawed, non-working model. This, in fact, is how the term is generally used in writing about software development—to describe a critical view of a commonly used software development practice.

 

References

Waterfall Model

Waterfall.ppt

Introduction to Software Design

Life Cycle Models

Seminar on Team building

Team building is a philosophy of job design in which employees are viewed as members of interdependent teams instead of as individual workers. Team building refers to a wide range of activities, presented to businesses, schools, sports teams, religious or nonprofit organizations, designed for improving team performance. Team building is pursued via a variety of practices, and can range from simple bonding exercises to complex simulations and multi-day team building retreats designed to develop a team (including group assessment and group-dynamic games), usually falling somewhere in between. It generally sits within the theory and practice of organizational development, but can also be applied to sports teams, school groups, and other contexts. Team building is not to be confused with "team recreation", which consists of activities for teams that are strictly recreational. Team building can also be seen in the day-to-day operations of an organization, and team dynamics can be improved through successful leadership. Team building is an important factor in any environment; its focus is to bring out the best in a team to ensure self-development, positive communication, leadership skills and the ability to work closely together as a team to solve problems.

 

Work environments tend to focus on individuals and personal goals, with reward & recognition singling out the achievements of individual employees. Team building can also refer to the process of selecting or creating a team from scratch.

References

link1

link2

link3
link4
link5

Seminar on effective communication

All communications, intentional or unintentional, have some effect. This effect may not always be in the communicator's favor or as desired by him or her. Communication that produces the desired effect or result is effective communication. It results in what the communicator wants. Effective communication generates the desired effect, maintains the effect and increases it. Effective communication serves the purpose for which it was planned or designed. The purpose could be to generate action, inform, create understanding or communicate a certain idea or point. Effective communication also ensures that message distortion does not take place during the communication process.

References

link1
link2
link3
link4
link5

Seminar on intelligence quotient

An intelligence quotient, or IQ, is a score derived from one of several standardized tests designed to assess intelligence. The abbreviation "IQ" comes from the German term Intelligenz-Quotient, originally coined by psychologist William Stern. When modern IQ tests are devised, the mean (average) score within an age group is set to 100 and the standard deviation (SD) almost always to 15, although this was not always so historically. Thus, the intention is that approximately 95% of the population scores within two SDs of the mean, i.e. has an IQ between 70 and 130. IQ scores have been shown to be associated with such factors as morbidity and mortality, parental social status, and, to a substantial degree, parental IQ.

 

While the heritability of IQ has been investigated for nearly a century, there is still debate about the significance of heritability estimates and the mechanisms of inheritance. IQ scores are used as predictors of educational achievement, special needs, job performance and income. They are also used to study IQ distributions in populations and the correlations between IQ and other variables. The average IQ scores for many populations have been rising at an average rate of three points per decade since the early 20th century, a phenomenon called the Flynn effect. It is disputed whether these changes in scores reflect real changes in intellectual abilities.

References

Intelligence Quotient

Emotional Quotient vs. Intelligence Quotient

Intelligence

Intelligence Quotient (IQ) Terms

Intelligence and Intelligence Quotient (IQ)

 

Seminar on Emotional intelligence

Emotional intelligence (EI) is the ability to identify, assess, and control the emotions of oneself, of others, and of groups. Various models and definitions have been proposed of which the ability and trait EI models are the most widely accepted in the scientific literature. Ability EI is usually measured using maximum performance tests and has stronger relationships with traditional intelligence, whereas trait EI is usually measured using self-report questionnaires and has stronger relationships with personality. Criticisms have centered on whether the construct is a real intelligence and whether it has incremental validity over IQ and the Big Five personality dimensions.

 


Substantial disagreement exists regarding the definition of EI, with respect to both terminology and operationalizations. Currently, there are three main models of EI: the ability EI model, mixed models of EI (usually subsumed under trait EI), and the trait EI model. Different models of EI have led to the development of various instruments for the assessment of the construct. While some of these measures may overlap, most researchers agree that they tap different constructs.

References

Emotional Intelligence
Emotional Intelligence: What Is It?
Emotional Intelligence
Emotional Intelligence in the Workplace
Can emotional intelligence be learned

Seminar on Positive thinking

Optimism is a mental attitude or world view that interprets situations and events as being best (optimized), meaning that in some way for factors that may not be fully comprehended, the present moment is in an optimum state. The concept is typically extended to include the attitude of hope for future conditions unfolding as optimal as well. The more broad concept of optimism is the understanding that all of nature, past, present and future, operates by laws of optimization along the lines of Hamilton's principle of optimization in the realm of physics. This understanding, although criticized by counter views such as pessimism, idealism and realism, leads to a state of mind that believes everything is as it should be, and that the future will be as well.

A common idiom used to illustrate optimism versus pessimism is a glass with water at the halfway point, where the optimist is said to see the glass as half full, but the pessimist sees the glass as half empty. The word is originally derived from the Latin optimum, meaning "best." Being optimistic, in the typical sense of the word, ultimately means one expects the best possible outcome from any given situation. This is usually referred to in psychology as dispositional optimism.

Researchers sometimes operationalize the term differently depending on their research, however. For example, Martin Seligman and his fellow researchers define it in terms of explanatory style, which is based on the way one explains life events. As for any trait characteristic, there are several ways to evaluate optimism, such as various forms of the Life Orientation Test, for the original definition of optimism, or the Attributional Style Questionnaire designed to test optimism in terms of explanatory style.

References

Positive thinking:
Positive Thinking
POSITIVE ATTITUDE BUILDING
Attitude is Everything
SIX THINKING HATS
Positive Thinking & Behavior

Sunday, September 16, 2012

Seminar on Solaris OS

Solaris is a Unix operating system originally developed by Sun Microsystems. It superseded their earlier SunOS in 1993. Oracle Solaris, as it is now known, has been owned by Oracle Corporation since Oracle's acquisition of Sun in January 2010. Solaris is known for its scalability, especially on SPARC systems, and for originating many innovative features such as DTrace, ZFS and Time Slider. Solaris supports SPARC-based and x86-based workstations and servers from Sun and other vendors, with efforts underway to port to additional platforms. Solaris is registered as compliant with the Single Unix Specification.


Solaris was historically developed as proprietary software, then in June 2005 Sun Microsystems released most of the codebase under the CDDL license, and founded the OpenSolaris open source project. With OpenSolaris, Sun wanted to build a developer and user community around the software. After the acquisition of Sun Microsystems in January 2010, Oracle decided to discontinue the OpenSolaris distribution and the development model. Just ten days before the internal Oracle memo announcing this decision to employees was "leaked", Garrett D'Amore had announced the illumos project, creating a fork of the Solaris kernel and launching what has since become a thriving alternative to Oracle Solaris.

References

SOLARIS VS. LINUX
Securing Solaris Systems
Introduction to Solaris System

SUN SOLARIS OPERATING SYSTEM

Unix Fundamentals ( Solaris System Administration )

Seminar on Grub bootloader

GNU GRUB (short for GNU GRand Unified Bootloader) is a boot loader package from the GNU Project. GRUB is the reference implementation of the Multiboot Specification, which provides a user the choice to boot one of multiple operating systems installed on a computer or select a specific kernel configuration available on a particular operating system's partitions.

 

 

GNU GRUB was developed from a package called the Grand Unified Bootloader (a play on Grand Unified Theory). It is predominantly used for Unix-like systems. The GNU operating system uses GNU GRUB as its boot loader, as do most Linux distributions. The Solaris operating system has used GRUB as its boot loader on x86 systems, starting with the Solaris 10 1/06 release.

References

Grub
Bootloader / multi-boot
Linux Booting Procedure
GRand Unified Boot


Saturday, September 15, 2012

Linux Kernel

The Linux kernel is the operating system kernel used by the Linux family of Unix-like operating systems. It is one of the most prominent examples of free and open source software. The Linux kernel is released under the GNU General Public License version 2 (GPLv2) (plus some firmware images with various non-free licenses), and is developed by contributors worldwide. Day-to-day development discussions take place on the Linux kernel mailing list.

 

The Linux kernel was initially conceived and created by Finnish computer science student Linus Torvalds in 1991. Linux rapidly accumulated developers and users who adapted code from other free software projects for use with the new operating system. The Linux kernel has received contributions from thousands of programmers. Many Linux distributions have been released based upon the Linux kernel.

LINUX Kernel
Linux Kernel Organization
The Linux Kernel: Introduction