Sunday, November 28, 2010

Earth Simulator

The Earth Simulator (ES), developed under the Japanese government's "Earth Simulator Project" initiative, was a highly parallel vector supercomputer system for running global climate models to evaluate the effects of global warming and problems in solid earth geophysics. The system was developed for the Japan Aerospace Exploration Agency, the Japan Atomic Energy Research Institute, and the Japan Marine Science and Technology Center in 1997. Construction started in October 1999, and the site officially opened on March 11, 2002. The project cost 60 billion yen.

Built by NEC, ES was based on their SX-6 architecture. It consisted of 640 nodes with eight vector processors and 16 gibibytes of computer memory at each node, for a total of 5120 processors and 10 terabytes of memory. Two nodes were installed per 1 metre x 1.4 metre x 2 metre cabinet. Each cabinet consumed 20 kW of power. The system had 700 terabytes of disk storage (450 for the system and 250 for the users) and 1.6 petabytes of mass storage in tape drives. It was able to run holistic simulations of global climate in both the atmosphere and the oceans down to a resolution of 10 km. Its performance on the LINPACK benchmark was 35.86 TFLOPS, which was almost five times faster than ASCI White.

Reference material

M-Commerce

Mobile Commerce, also known as M-Commerce or mCommerce, is the ability to conduct commerce using a mobile device, such as a mobile phone, a personal digital assistant (PDA), a smartphone, or other emerging mobile equipment such as dashtop mobile devices. Mobile Commerce has been defined as follows:

"Mobile Commerce is any transaction, involving the transfer of ownership or rights to use goods and services, which is initiated and/or completed by using mobile access to computer-mediated networks with the help of an electronic device."

Mobile commerce was born in 1997 when the first two mobile-phone enabled Coca Cola vending machines were installed in the Helsinki area in Finland. The machines accepted payment via SMS text messages. The first mobile phone-based banking service was launched in 1997 by Merita Bank of Finland, also using SMS.

In 1998, the first sales of digital content as downloads to mobile phones were made possible when the first commercial downloadable ringtones were launched in Finland by Radiolinja (now part of Elisa Oyj).

Two major national commercial platforms for mobile commerce were launched in 1999: Smart Money (http://smart.com.ph/money/) in the Philippines, and NTT DoCoMo's i-Mode Internet service in Japan. i-Mode offered a revolutionary revenue-sharing plan where NTT DoCoMo kept 9 percent of the fee users paid for content, and returned 91 percent to the content owner.

For reference

Socket Programming

Sockets are interfaces that can "plug into" each other over a network. Once so "plugged in", the programs so connected communicate. A "server" program is exposed via a socket connected to a certain /etc/services port number. A "client" program can then connect its own socket to the server's socket, at which time the client program's writes to the socket are read as stdin to the server program, and stdout from the server program are read from the client's socket reads.

Before a user process can perform I/O operations, it calls Open to specify and obtain permissions for the file or device to be used. Once an object has been opened, the user process makes one or more calls to Read or Write data. Read reads data from the object and transfers it to the user process, while Write transfers data from the user process to the object. After all transfer operations are complete, the user process calls Close to inform the operating system that it has finished using that object.
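The Open/Read/Write/Close flow above can be sketched with Python's standard socket module. This is a minimal, self-contained echo example; the port number (50007) and the "echo:" behaviour are arbitrary choices for the sketch, not a real service.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # arbitrary local port chosen for this sketch

# "Open": the server creates a socket, binds it to a port, and listens.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind((HOST, PORT))
srv.listen(1)

def serve_once():
    conn, _ = srv.accept()               # wait for one client to "plug in"
    with conn:
        data = conn.recv(1024)           # "Read": the client's write arrives here
        conn.sendall(b"echo: " + data)   # "Write": flows back through the socket
    srv.close()                          # "Close": finished with the object

server = threading.Thread(target=serve_once)
server.start()

# The client connects its own socket to the server's socket, writes, then reads.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello")
    reply = cli.recv(1024)

server.join()
print(reply.decode())  # echo: hello
```

Binding and listening happen before the client connects, so the example has no startup race; a real server would accept clients in a loop rather than closing after one connection.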

 

For Reference

Tuesday, April 20, 2010

Hi All

I am happy to tell you that we have now moved to our own domain, so you can always reach us at techfusion.in. Also, we have imported all the content from this blog to our site for future reference.

Thanks!!

Saturday, March 13, 2010

Video Door Phone

A video door phone is a security solution that can also be used in home automation. It has become a necessity because we love our families and want to protect them: we need a way to see a visitor and have a conversation before allowing the visitor into the house.

We also wish to keep a watch on children when they are playing in the garden or in the club house.

The high-quality video door phone is a state-of-the-art product which comprises:

  • An indoor unit with a monitor
  • An outdoor unit with an in-built microphone and camera

The hands-free video door phone enables the person inside the house to see the visitor and have a conversation before entry into the house.

Reference link

Thursday, March 11, 2010

CCTV System

Closed-circuit television (CCTV) is the use of video cameras to transmit a signal to a specific place, on a limited set of monitors.

It differs from broadcast television in that the signal is not openly transmitted, though it may employ point-to-point wireless links. CCTV is often used for surveillance in areas that may need monitoring, such as banks, casinos, airports, military installations, and convenience stores.

In industrial plants, CCTV equipment may be used to observe parts of a process from a central control room, for example when the environment is not suitable for humans. CCTV systems may operate continuously or only as required to monitor a particular event. A more advanced form of CCTV, utilizing digital video recorders (DVRs), provides recording for possibly many years, with a variety of quality and performance options and extra features (such as motion detection and email alerts).
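The motion-detection feature mentioned above reduces, at its simplest, to comparing successive frames. The sketch below shows that idea in plain Python; the frame data and both thresholds are invented for the example, not values from any real DVR.

```python
# Minimal frame-differencing motion detector: frames are grayscale pixel grids
# (lists of lists, values 0-255). Motion is flagged when enough pixels change
# by more than a per-pixel threshold.

def motion_detected(prev, curr, pixel_delta=30, min_changed=4):
    changed = sum(
        1
        for prev_row, curr_row in zip(prev, curr)
        for a, b in zip(prev_row, curr_row)
        if abs(a - b) > pixel_delta
    )
    return changed >= min_changed

frame_a = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
frame_b = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]      # static scene
frame_c = [[10, 200, 200], [10, 200, 200], [10, 10, 10]]  # object enters

print(motion_detected(frame_a, frame_b))  # False
print(motion_detected(frame_a, frame_c))  # True
```

Real systems add noise filtering and region masking on top of this, but the baseline-versus-current comparison is the core of the feature.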

Surveillance of the public using CCTV is particularly common in the UK, where there are reportedly more cameras per person than in any other country in the world. There and elsewhere, its increasing use has triggered a debate about security versus privacy.

Reference link

Sunday, March 7, 2010

WiMAX

WiMAX, meaning Worldwide Interoperability for Microwave Access, is a telecommunications technology that provides wireless transmission of data using a variety of transmission modes, from point-to-multipoint links to portable and fully mobile internet access. The technology provides up to 10 Mbps broadband speed without the need for cables. The technology is based on the IEEE 802.16 standard (also called Broadband Wireless Access). The name "WiMAX" was created by the WiMAX Forum, which was formed in June 2001 to promote conformity and interoperability of the standard. The forum describes WiMAX as "a standards-based technology enabling the delivery of last mile wireless broadband access as an alternative to cable and DSL".

Reference link

Tuesday, March 2, 2010

WISENET

WISENET is a wireless sensor network that monitors environmental conditions such as light, temperature, and humidity. The network is composed of nodes called "motes" that form an ad-hoc network to transmit this data to a computer that functions as a server. The server stores the data in a database, where it can later be retrieved and analyzed via a web-based interface. The network has been demonstrated successfully with an implementation of one sensor mote.

Introduction:
The technological drive for smaller devices using less power with greater functionality has created new potential applications in the sensor and data-acquisition sectors. Low-power microcontrollers with RF transceivers and various digital and analog sensors allow a wireless, battery-operated network of sensor modules ("motes") to acquire a wide range of data. TinyOS is an operating system designed to address the priorities of such a sensor network: low power, hard real-time constraints, and robust communications.

The first goal of WISENET is to create a new hardware platform that takes advantage of newer microcontrollers with greater functionality and more features. This involves selecting the hardware, designing the motes, and porting TinyOS. Once the platform is complete and TinyOS has been ported to it, the next stage is to use this platform to create a small-scale system of wireless networked sensors.

System Description:
There are two primary subsystems (Data Analysis and Data Acquisition) composed of three major components (Client, Server, and Sensor Mote Network).

Primary Subsystems:
There are two top-level subsystems:

  • Data Analysis
  • Data Acquisition

Data Analysis:
This subsystem is software-only (relative to WISENET). It relies on existing Internet and web (HTTP) infrastructure to provide communications between the Client and Server components. The focus of this subsystem is to selectively present the collected environmental data to the end user in a graphical manner.
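The mote-to-server-to-database path can be sketched with an in-memory database. The table layout, field names, and readings below are invented for this illustration and are not taken from the actual WISENET implementation.

```python
import sqlite3

# Toy version of the WISENET server path: mote readings go into a database,
# and the Data Analysis side queries them back out for presentation.

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE readings (
    mote_id INTEGER, sensor TEXT, value REAL, ts INTEGER)""")

# Data Acquisition: a mote reports light, temperature, and humidity samples.
samples = [
    (1, "light", 412.0, 1000),
    (1, "temperature", 21.5, 1000),
    (1, "humidity", 0.48, 1000),
    (1, "temperature", 22.1, 1060),
]
db.executemany("INSERT INTO readings VALUES (?, ?, ?, ?)", samples)

# Data Analysis: selectively present collected data, e.g. mean temperature.
(avg_temp,) = db.execute(
    "SELECT AVG(value) FROM readings WHERE sensor = 'temperature'"
).fetchone()
print(round(avg_temp, 2))  # 21.8
```

In the real system the query results would feed a web-based graphical interface rather than a print statement, but the store-then-query separation between the two subsystems is the same.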

Reference link

Monday, March 1, 2010

Optical fiber communication

Fiber-optic communication is a method of transmitting information from one place to another by sending pulses of light through an optical fiber. The light forms an electromagnetic carrier wave that is modulated to carry information. First developed in the 1970s, fiber-optic communication systems have revolutionized the telecommunications industry and have played a major role in the advent of the Information Age. Because of their advantages over electrical transmission, optical fibers have largely replaced copper wire communications in core networks in the developed world.

The process of communicating using fiber optics involves the following basic steps: creating the optical signal using a transmitter, relaying the signal along the fiber, ensuring that the signal does not become too distorted or weak, receiving the optical signal, and converting it into an electrical signal.
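Those basic steps can be sketched as a toy end-to-end pipeline: a transmitter turns bits into light pulses, the fiber attenuates them, and a receiver thresholds the detected power back into bits. The power levels, loss figure, and threshold below are illustrative only.

```python
def transmit(bits, launch_power=1.0):
    # Transmitter: on-off keying — a pulse of light for 1, darkness for 0.
    return [launch_power if b else 0.0 for b in bits]

def propagate(pulses, loss_fraction=0.6):
    # Fiber: the signal weakens but (here) is not distorted beyond recovery.
    return [p * (1.0 - loss_fraction) for p in pulses]

def receive(pulses, threshold=0.2):
    # Receiver: convert detected optical power back into an electrical bit.
    return [1 if p > threshold else 0 for p in pulses]

message = [1, 0, 1, 1, 0, 0, 1]
recovered = receive(propagate(transmit(message)))
print(recovered == message)  # True
```

The "ensuring the signal does not become too weak" step corresponds here to keeping the attenuated power above the receiver threshold; in a real long-haul link that job is done by optical amplifiers spaced along the fiber.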

Optical fiber is used by many telecommunications companies to transmit telephone signals, Internet communication, and cable television signals. Due to much lower attenuation and interference, optical fiber has large advantages over existing copper wire in long-distance and high-demand applications. However, infrastructure development within cities was relatively difficult and time-consuming, and fiber-optic systems were complex and expensive to install and operate. Due to these difficulties, fiber-optic communication systems were primarily installed in long-distance applications, where they can be used to their full transmission capacity, offsetting the increased cost. Since 2000, the prices for fiber-optic communications have dropped considerably. Rolling out fiber to the home has now become more cost-effective than rolling out a copper-based network. Prices have dropped to $850 per subscriber in the US, and lower in countries like the Netherlands, where digging costs are low.

Since 1990, when optical-amplification systems became commercially available, the telecommunications industry has laid a vast network of intercity and transoceanic fiber communication lines. By 2002, an intercontinental network of 250,000 km of submarine communications cable with a capacity of 2.56 Tb/s was completed, and although specific network capacities are privileged information, telecommunications investment reports indicate that network capacity has increased dramatically since 2004.

Reference link

Lightweight Directory Access Protocol

The Lightweight Directory Access Protocol, or LDAP, is an application protocol for querying and modifying directory services running over TCP/IP.

A directory is a set of objects with attributes organized in a logical and hierarchical manner. A simple example is the telephone directory, which consists of a list of names (of either persons or organizations) organized alphabetically, with each name having an address and phone number associated with it.

An LDAP directory tree often reflects various political, geographic, and/or organizational boundaries, depending on the model chosen. LDAP deployments today tend to use Domain Name System (DNS) names for structuring the topmost levels of the hierarchy. Deeper inside the directory might appear entries representing people, organizational units, printers, documents, groups of people or anything else that represents a given tree entry (or multiple entries).

Its current version is LDAPv3, which is specified in a series of Internet Engineering Task Force (IETF) Standards Track Requests for Comments (RFCs), as detailed in RFC 4510.

A client starts an LDAP session by connecting to an LDAP server, called a Directory System Agent (DSA), by default on TCP port 389. The client then sends an operation request to the server, and the server sends responses in return. With some exceptions, the client does not need to wait for a response before sending the next request, and the server may send the responses in any order.

The client may request the following operations:

    * Start TLS — use the LDAPv3 Transport Layer Security (TLS) extension for a secure connection
    * Bind — authenticate and specify LDAP protocol version
    * Search — search for and/or retrieve directory entries
    * Compare — test if a named entry contains a given attribute value
    * Add a new entry
    * Delete an entry
    * Modify an entry
    * Modify Distinguished Name (DN) — move or rename an entry
    * Abandon — abort a previous request
    * Extended Operation — generic operation used to define other operations
    * Unbind — close the connection (not the inverse of Bind)
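To illustrate the shape of the Search and Compare operations above, here is a toy in-memory model of a directory. It does not speak the LDAP protocol (a real client would send these operations to a DSA on TCP port 389), and the DNs and attributes are invented for the sketch.

```python
# Entries keyed by Distinguished Name (DN); attributes are multi-valued.
directory = {
    "dc=example,dc=com": {"objectClass": ["domain"]},
    "ou=people,dc=example,dc=com": {"objectClass": ["organizationalUnit"]},
    "cn=Alice,ou=people,dc=example,dc=com": {
        "objectClass": ["person"], "mail": ["alice@example.com"]},
    "cn=Bob,ou=people,dc=example,dc=com": {
        "objectClass": ["person"], "mail": ["bob@example.com"]},
}

def search(base_dn, attr, value):
    """Search: return DNs under base_dn whose attribute contains the value."""
    return [
        dn for dn, attrs in directory.items()
        if dn.endswith(base_dn) and value in attrs.get(attr, [])
    ]

def compare(dn, attr, value):
    """Compare: does the named entry hold this attribute value?"""
    return value in directory.get(dn, {}).get(attr, [])

people = search("ou=people,dc=example,dc=com", "objectClass", "person")
print(len(people))  # 2
print(compare("cn=Alice,ou=people,dc=example,dc=com", "mail",
              "alice@example.com"))  # True
```

Note how the hierarchy lives entirely in the DN suffixes, mirroring how an LDAP tree's structure follows its naming: searching under a base DN is a suffix match plus an attribute filter.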

Reference links

Sunday, February 28, 2010

Kerberos

Kerberos (pronounced /ˈkɛərbərəs/[1]) is a computer network authentication protocol, which allows nodes communicating over a non-secure network to prove their identity to one another in a secure manner. It is also a suite of free software published by the Massachusetts Institute of Technology (MIT) that implements this protocol. Its designers aimed primarily at a client-server model, and it provides mutual authentication: both the user and the server verify each other's identity. Kerberos protocol messages are protected against eavesdropping and replay attacks.

Kerberos builds on symmetric key cryptography and requires a trusted third party. Extensions to Kerberos can provide for the use of public-key cryptography during certain phases of authentication.

MIT developed Kerberos to protect network services provided by Project Athena. The protocol was named after the Greek mythological character Kerberos (or Cerberus), known in Greek mythology as being the monstrous three-headed guard dog of Hades. Several versions of the protocol exist; versions 1–3 occurred only internally at MIT.

Steve Miller and Clifford Neuman, the primary designers of Kerberos version 4, published that version in the late 1980s, although they had targeted it primarily for Project Athena.

Version 5, designed by John Kohl and Clifford Neuman, appeared as RFC 1510 in 1993 (made obsolete by RFC 4120 in 2005), with the intention of overcoming the limitations and security problems of version 4.

MIT makes an implementation of Kerberos freely available, under copyright permissions similar to those used for BSD. In 2007, MIT formed the Kerberos Consortium to foster continued development. Founding sponsors include vendors such as Sun Microsystems, Apple, Google, Microsoft and Centrify Corporation, and academic institutions such as Stanford University and MIT.

Reference links

Wednesday, February 24, 2010

Organic light emitting diode (OLED)

Organic light emitting diode (OLED) display technology has been grabbing headlines in recent years. Now one form of OLED display, light-emitting polymer (LEP) technology, is rapidly emerging as a serious candidate for next-generation flat panel displays. LEP technology promises thin, lightweight emissive displays with low drive voltage, low power consumption, high contrast, wide viewing angle, and fast switching times.

One of the main attractions of LEP is its compatibility with plastic substrates and with a number of printing-based fabrication techniques, which offer the possibility of roll-to-roll processing for cost-effective manufacturing.

LEPs are inexpensive and consume much less power than any other flat panel display. Their thin form and flexibility allow devices to be made in almost any shape. One interesting application of these displays is electronic paper that can be rolled up like a newspaper.

Cambridge Display Technology of the UK is betting that its lightweight, ultra-thin light-emitting polymer displays have the right stuff to finally replace the bulky, space-consuming and power-hungry cathode ray tubes (CRTs) used in television screens and computer monitors, and become the ubiquitous display medium of the 21st century.

Reference links

Tuesday, February 23, 2010

Augmented Reality

Augmented reality (AR) refers to computer displays that add virtual information to a user's sensory perceptions. Most AR research focuses on "see-through" devices, usually worn on the head, that overlay graphics and text on the user's view of his or her surroundings. AR systems track the position and orientation of the user's head so that the overlaid material can be aligned with the user's view of the world.
Consider what AR could make routinely possible. A repairperson viewing a broken piece of equipment could see instructions highlighting the parts that need to be inspected. A surgeon could get the equivalent of x-ray vision by observing live ultrasound scans of internal organs that are overlaid on the patient's body. Soldiers could see the positions of enemy snipers who had been spotted by unmanned reconnaissance planes.
Getting the right information at the right time and the right place is key in all these applications. Personal digital assistants such as the Palm and the Pocket PC can provide timely information using wireless networking and Global Positioning System (GPS) receivers that constantly track the handheld devices. But what makes augmented reality different is how the information is presented: not on a separate display but integrated with the user's perceptions. In augmented reality, the user's view of the world and the computer interface literally become one.

Reference links

Monday, February 22, 2010

The Bionic Eye

The entire system runs on a battery pack that's housed with the video processing unit. When the camera captures an image -- of, say, a tree -- the image is in the form of light and dark pixels. It sends this image to the video processor, which converts the tree-shaped pattern of pixels into a series of electrical pulses that represent "light" and "dark." The processor sends these pulses to a radio transmitter on the glasses, which then transmits the pulses in radio form to a receiver implanted underneath the subject's skin. The receiver is directly connected via a wire to the electrode array implanted at the back of the eye, and it sends the pulses down the wire.

When the pulses reach the retinal implant, they excite the electrode array. The array acts as the artificial equivalent of the retina's photoreceptors. The electrodes are stimulated in accordance with the encoded pattern of light and dark that represents the tree, as the retina's photoreceptors would be if they were working (except that the pattern wouldn't be digitally encoded). The electrical signals generated by the stimulated electrodes then travel as neural signals to the visual center of the brain by way of the normal pathways used by healthy eyes -- the optic nerves. In macular degeneration and retinitis pigmentosa, the optical neural pathways aren't damaged. The brain, in turn, interprets these signals as a tree and tells the subject, "You're seeing a tree."
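The encoding step described above, from camera pixels to electrode pulses, can be sketched as a simple threshold encoder. The grid size, brightness threshold, and pulse values below are invented for this illustration; a real video processing unit performs far more sophisticated image processing.

```python
def encode_frame(frame, threshold=128, pulse=1.0):
    """Map each grayscale pixel to an electrode stimulus:
    a bright ("light") pixel becomes a pulse, a dark pixel becomes none."""
    return [[pulse if px >= threshold else 0.0 for px in row] for row in frame]

# A crude "tree": a bright trunk column against a dark background.
frame = [
    [20, 200, 20],
    [20, 200, 20],
    [20, 200, 20],
]
pulses = encode_frame(frame)
print(pulses[0])  # [0.0, 1.0, 0.0]
```

Each entry of the output grid stands in for one electrode in the retinal array: the spatial pattern of stimulation reproduces the light/dark pattern of the scene, which is what lets the brain interpret the signals as the original shape.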

Reference links

Sunday, February 21, 2010

Optical Communications in Space

In telecommunications, Free Space Optics (FSO) is an optical communication technology that uses light propagating in free space to transmit data between two points. The technology is useful where physical connections by means of fibre-optic cables are impractical due to high costs or other considerations. Optical communications, in various forms, have been used for thousands of years. The Ancient Greeks polished their shields to send signals during battle. In the modern era, semaphores and wireless solar telegraphs called heliographs were developed, using coded signals to communicate with their recipients.
Free Space Optics is additionally used for communications between spacecraft. The optical links can be implemented using infrared laser light, although low-data-rate communication over short distances is possible using LEDs. Maximum range for terrestrial links is on the order of 2-3 km, but the stability and quality of the link are highly dependent on atmospheric factors such as rain, fog, dust and heat. Amateur radio operators have achieved significantly longer distances (173 miles on at least one occasion) using incoherent sources of light from high-intensity LEDs. However, the low-grade equipment used limited bandwidths to about 4 kHz. In outer space, the communication range of free-space optical communication is currently on the order of several thousand kilometers, but it has the potential to bridge interplanetary distances of millions of kilometers, using optical telescopes as beam expanders. IrDA is also a very simple form of free-space optical communication.

Reference links related to this topic

Saturday, February 20, 2010

4G Wireless Systems

4G refers to the fourth generation of cellular wireless standards. It is a successor to the 3G and 2G standards, with the aim of providing a wide range of data rates, up to ultra-broadband (gigabit-speed) Internet access, to mobile as well as stationary users. Although 4G is a broad term that has had several different and more vague definitions, this article uses 4G to refer to IMT Advanced (International Mobile Telecommunications Advanced), as defined by ITU-R.

A 4G cellular system must have target peak data rates of up to approximately 100 Mbit/s for high mobility such as mobile access and up to approximately 1 Gbit/s for low mobility such as nomadic/local wireless access, according to the ITU requirements. Scalable bandwidths up to at least 40 MHz should be provided. A 4G system is expected to provide a comprehensive and secure all-IP based solution where facilities such as IP telephony, ultra-broadband Internet access, gaming services and HDTV streamed multimedia may be provided to users.

Reference link

BitTorrent

BitTorrent is a peer-to-peer file sharing protocol used for distributing large amounts of data. BitTorrent is one of the most common protocols for transferring large files, and it has been estimated that it accounts for approximately 27-55% of all Internet traffic (depending on geographical location) as of February 2009.[1]

The BitTorrent protocol allows users to distribute large amounts of data without the heavy demands on their computers that would be needed for standard Internet hosting. A standard host's servers can easily be brought to a halt if high levels of simultaneous data flow are reached. The protocol works as an alternative data distribution method that makes even small computers (e.g. mobile phones) with low bandwidth capable of participating in large data transfers.

Reference links

Thursday, February 18, 2010

Wireless USB

Wireless USB is a short-range, high-bandwidth wireless radio communication protocol created by the Wireless USB Promoter Group. Wireless USB is sometimes abbreviated as "WUSB", although the USB Implementers Forum discourages this practice and instead prefers to call the technology "Certified Wireless USB" to differentiate it from competitors. Wireless USB is based on the WiMedia Alliance's Ultra-WideBand (UWB) common radio platform, which is capable of sending 480 Mbit/s at distances up to 3 meters and 110 Mbit/s at up to 10 meters. It was designed to operate in the 3.1 to 10.6 GHz frequency range, although local regulatory policies may restrict the legal operating range for any given country.


Reference links

Wednesday, February 17, 2010

Tripwire

Tripwire is a reliable intrusion detection system. It is a software tool that checks to see what has changed in your system, mainly by monitoring the key attributes of your files: the binary signature, size, and other related data. Security and operational stability must go hand in hand; if the user does not have control over the various operations taking place, then naturally the security of the system is also compromised. Tripwire has a powerful feature which pinpoints the changes that have taken place, notifies the administrator of these changes, determines the nature of the changes, and provides the information needed to decide how to manage the change.

Tripwire integrity management solutions monitor changes to vital system and configuration files. Any changes that occur are compared to a snapshot of the established good baseline. The software detects the changes, notifies the staff, and enables rapid recovery and remedy. All Tripwire installations can be centrally managed, and Tripwire software's cross-platform functionality enables you to manage thousands of devices across your infrastructure.
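The baseline-snapshot idea can be sketched with standard hashing. Real Tripwire tracks many more attributes and protects its baseline database, so treat this only as an illustration of the compare-against-baseline mechanism, not of the actual product.

```python
import hashlib
import os
import tempfile

def snapshot(paths):
    """Record a baseline of key attributes (here: SHA-256 digest and size)."""
    baseline = {}
    for path in paths:
        with open(path, "rb") as f:
            data = f.read()
        baseline[path] = (hashlib.sha256(data).hexdigest(), len(data))
    return baseline

def changed_files(baseline, paths):
    """Compare a fresh scan against the baseline and pinpoint what changed."""
    current = snapshot(paths)
    return [p for p in paths if current[p] != baseline[p]]

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "config.txt")
    with open(target, "w") as f:
        f.write("port=22\n")

    baseline = snapshot([target])   # snapshot of the established good baseline
    with open(target, "a") as f:    # a change occurs after the snapshot
        f.write("port=2222\n")

    modified = changed_files(baseline, [target])
    print(len(modified))  # 1
```

The returned list of paths is what would drive the notification step: the administrator learns exactly which files diverged from the known-good state.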

Reference link

Tuesday, February 9, 2010

Data mining

Data mining is the process of extracting patterns from data. It is becoming an increasingly important tool for transforming raw data into information. It is commonly used in a wide range of profiling practices, such as marketing, surveillance, fraud detection and scientific discovery.

Data mining can be used to uncover patterns in data but is often carried out only on samples of data. The mining process will be ineffective if the samples are not a good representation of the larger body of data. Data mining cannot discover patterns that may be present in the larger body of data if those patterns are not present in the sample being "mined". Inability to find patterns may become a cause of disputes between customers and service providers. Therefore, data mining is not foolproof, but it may be useful if sufficiently representative data samples are collected. The discovery of a particular pattern in a particular set of data does not necessarily mean that the pattern holds elsewhere in the larger data from which that sample was drawn. An important part of the process is the verification and validation of patterns on other samples of data.
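The sampling caveat above can be demonstrated concretely: a pattern present in the full body of data can be invisible in an unrepresentative sample, no matter how good the mining algorithm is. The two branches and the fraud rate below are invented for the illustration.

```python
import random

# Invented scenario: transactions from two branches, with fraud occurring
# only at branch "B". A sample collected only from branch "A" can never
# reveal the fraud pattern.

random.seed(0)
population = ([("A", "normal")] * 500
              + [("B", "normal")] * 490
              + [("B", "fraud")] * 10)
random.shuffle(population)

def has_fraud(sample):
    return any(label == "fraud" for _, label in sample)

# Unrepresentative sample: 100 records seen at branch "A" only.
biased_sample = [rec for rec in population if rec[0] == "A"][:100]

print(has_fraud(population))     # True: the pattern exists in the full data
print(has_fraud(biased_sample))  # False: the sample cannot reveal it
```

A randomly drawn sample across both branches would usually (though not always) contain at least one fraudulent record, which is exactly why verifying discovered patterns against other samples matters.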

Reference links

Friday, February 5, 2010

Interactive Voice Response

Interactive Voice Response (IVR) is an interactive technology that allows a computer to detect voice and keypad inputs. IVR technology is used extensively in telecommunications, but is also being introduced into automobile systems for hands-free operation. Current deployment in automobiles revolves around satellite navigation, audio and mobile phone systems. In telecommunications, IVR allows customers to access a company's database via a telephone touchtone keypad or by speech recognition, after which they can service their own enquiries by following the instructions. IVR systems can respond with pre-recorded or dynamically generated audio to further direct users on how to proceed. IVR systems can be used to control almost any function where the interface can be broken down into a series of simple menu choices. In telecommunications applications, such as customer support lines, IVR systems generally scale well to handle large call volumes.
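Since an IVR interface reduces to a series of simple menu choices, a call flow can be sketched as a small tree walk: each node maps keypad digits either to a spoken prompt or to a deeper menu. The menu wording below is invented for the example.

```python
# Each node has a "prompt" (what the caller hears) and digit keys leading
# to child nodes. Invalid keys simply replay the current menu.
MENU = {
    "prompt": "Main menu: 1 for balance, 2 for support.",
    "1": {"prompt": "Your balance is $42."},
    "2": {
        "prompt": "Support: 1 for billing, 2 for technical.",
        "1": {"prompt": "Connecting you to billing."},
        "2": {"prompt": "Connecting you to technical support."},
    },
}

def run_call(keypresses):
    """Walk the menu tree with the caller's keypad inputs, collecting prompts."""
    node, heard = MENU, [MENU["prompt"]]
    for key in keypresses:
        node = node.get(key, node)   # unknown digit: stay and replay the menu
        heard.append(node["prompt"])
    return heard

print(run_call(["2", "1"])[-1])  # Connecting you to billing.
```

A production IVR layers speech recognition, audio playback, and telephony signalling on top, but the underlying control structure is exactly this kind of menu tree, which is why IVR scales well to large call volumes.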

It has become common among companies that have recently entered the telecom industry to refer to an Automated Attendant as an IVR. The terms Automated Attendant and IVR are distinct and mean different things to traditional telecom professionals, whereas emerging telephony and VoIP professionals often use the term IVR as a catch-all to signify any kind of telephony menu, even a basic automated attendant. The term VRU, for voice response unit, is sometimes used as well.

Reference links