Thanks

Thanks for visiting this site. This site provides the latest seminar topics for any engineering stream, be it Computer Science, Electronics, Information Technology or Mechanical.

Saturday, February 21, 2009

EDI

EDI has no single consensus definition. Two generally accepted definitions are: a standardized format for communication of business information between computer applications, and the computer-to-computer exchange of information between companies using an industry standard format. In short, Electronic Data Interchange (EDI) is the computer-to-computer exchange of business information using a public standard. EDI is a central part of Electronic Commerce (EC), because it enables businesses to exchange business information electronically much faster, more cheaply and more accurately than is possible using paper-based systems. EDI consists of data that has been put into a standard format and is electronically transferred between trading partners. Often, an acknowledgement is returned to the sender informing them that the data was received.

EDI vs EDT
The term EDI is often used synonymously with the term EDT, but the two are different and should not be used interchangeably.
- EDT, Electronic Data Transfer, is simply sending a file electronically to a trading partner.
- EDI documents are also sent electronically, but they are sent in a standard format. This standard format is what makes EDI different from EDT.

History of EDI
The government did not invent EC/EDI; it is merely taking advantage of an established technology that has been widely used in the private sector for the last few decades. EDI was first used in the transportation industry more than 20 years ago, by ocean, motor, air and rail carriers and the associated shippers, brokers, customs agents, freight forwarders and bankers. Developed in the 1960s to accelerate the movement of documents, it has been widely employed in the automotive, retail, transportation and international trade sectors since the mid-80s, and its use is steadily growing.

EDI Features
- Independent of trading partners' internal computerized application systems.
- Interfaces with internal application systems rather than being integrated with them.
- Not limited by differences in the computer or communications equipment of trading companies.
- Consists only of business data, not verbiage or free-form messages.

The EDI Process
Let's take a high-level look at the EDI process. In a typical example, a car manufacturing company is a trading partner with an insurance company. The human resources department at the car manufacturer has a new employee who needs to be enrolled in an insurance plan. The HR representative enters the individual into the computer. The new employee's data is mapped into a standard format and sent electronically to the insurance company. The insurance company maps the data out of the standard format and into a format that is usable with its computer. An acknowledgment is automatically generated by the insurance company and sent to the car manufacturer informing it that the data was received.

To summarise, the sequence of events in any EDI transaction is as follows:
1. The sender's own business application system assembles the data to be transmitted.
2. This data is translated into an EDI standard format (i.e., a transaction set).
3. The transaction set is transmitted either through a third-party network (e.g., a VAN) or directly to the receiver's EDI translation system.
4. The transaction set, in EDI standard format, is translated into files that are usable by the receiver's business application system.
5. The files are processed using the receiver's business application system.
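The flow above can be made concrete with a small sketch. The code below is purely illustrative: the ENR/ACK segment layout, the field names and the helper functions are invented for this example and are not part of any real EDI standard such as X12 or EDIFACT.

```python
# Minimal sketch of the EDI flow described above (illustrative only).

def to_transaction_set(employee):
    """Sender side: map internal HR data into a flat, standard-format record."""
    return "ENR*{id}*{name}*{plan}~".format(
        id=employee["employee_id"],
        name=employee["name"],
        plan=employee["insurance_plan"],
    )

def from_transaction_set(segment):
    """Receiver side: map the standard-format record back into usable fields."""
    fields = segment.rstrip("~").split("*")
    return {"employee_id": fields[1], "name": fields[2], "insurance_plan": fields[3]}

def acknowledge(segment):
    """Receiver returns a simple acknowledgement that the data was received."""
    return "ACK*{}~".format(from_transaction_set(segment)["employee_id"])

if __name__ == "__main__":
    new_hire = {"employee_id": "1042", "name": "J. Doe", "insurance_plan": "PLAN-B"}
    ts = to_transaction_set(new_hire)        # steps 1-2: assemble and translate
    print(ts)                                # step 3: transmit (e.g. over a VAN)
    print(from_transaction_set(ts))          # step 4: translate into receiver's files
    print(acknowledge(ts))                   # acknowledgement back to the sender
```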

The SAT (SIM Application Toolkit)

The SAT (SIM Application Toolkit) provides a flexible interface through which developers can build services and MMI (Man Machine Interface) in order to enhance the functionality of the mobile. This module is not designed for service developers, but for network engineers who require a grounding in the concepts of the SAT and how it may impact network architecture and performance. It explores the basic SAT interface along with the architecture required in order to deliver effective SAT-based services to the handset.

Real Time Operating System

Within the last ten years real-time systems research has been transformed from a niche industry into a mainstream enterprise with clients in a wide variety of industries and academic disciplines. It will continue to grow in importance and affect an increasing number of industries, as many of the reasons for the rise of its prominence will persist for the foreseeable future.

What is an RTOS?
Real-time computing and real-time operating systems (RTOS) form an emerging discipline in software engineering. This is an embedded technology whereby the application software performs the dual function of an operating system as well. In an RTOS the correctness of the system depends not only on the logical result but also on the time at which the results are obtained.

A real-time system:
>> Provides deterministic response to external events
>> Has the ability to process data at its rate of occurrence
>> Is deterministic in its functional and timing behavior
>> Has its timing analyzed in the worst case, not in the typical or normal case, to guarantee a bounded response in any circumstances.

The seminar will provide a practical understanding of the goals, structure and operation of a real-time operating system (RTOS). The basic concepts of a real-time system, such as the RTOS kernel, will be described in detail. The structure of the kernel is discussed, stressing the factors which affect response times and performance. Examples of RTOS functions such as scheduling, interrupt processing and intertask communication structures will also be discussed, and features of commercially available RTOS products are presented.

A real-time system is one where the timeliness of the result of a calculation is important. Examples include military weapons systems, factory control systems, and Internet video and audio streaming. Different definitions of real-time systems exist. Here are just a few:
- Real-time computing is computing where system correctness depends not only on the correctness of the logical result of the computation but also on the result delivery time.
- A real-time system is an interactive system that maintains an on-going relationship with an asynchronous environment, i.e. an environment that progresses irrespective of the real-time system, in an uncooperative manner.
- Real-time (software) (IEEE 610.12-1990): pertaining to a system or mode of operation in which computation is performed during the actual time that an external process occurs, in order that the computation results may be used to control, monitor, or respond in a timely manner to the external process.

From the above definitions it is understood that in real-time systems, TIME is the biggest constraint. This makes real-time systems different from ordinary systems. In an RTS, data needs to be processed at some regular and timely rate, and the system should also respond quickly to events occurring at non-regular rates. In real-world systems there is some delay between the presentation of inputs and the appearance of the associated outputs, called the response time. A real-time system must therefore satisfy explicit response-time constraints or risk severe consequences, including failure.

Real-Time Systems and Real-Time Operating Systems
Timeliness is the single most important aspect of a real-time system. These systems respond to a series of external inputs, which arrive in an unpredictable fashion. The real-time systems process these inputs, take appropriate decisions and also generate the output necessary to control the peripherals connected to them.
As defined by Donald Gillies, a real-time system is one in which the correctness of the computations not only depends upon the logical correctness of the computation but also upon the time at which the result is produced. If the timing constraints are not met, system failure is said to have occurred. It is essential that the timing constraints of the system are guaranteed to be met, and guaranteeing timing behaviour requires that the system be predictable. The design of a real-time system must specify the timing requirements of the system and ensure that the system performance is both correct and timely. There are three types of time constraints:
- Hard: A late response is incorrect and implies a system failure. An example of such a system is medical equipment monitoring the vital functions of a human body, where a late response would be considered a failure.
- Soft: Timeliness requirements are defined using an average response time. If a single computation is late, it is not usually significant, although repeated late computations can result in system failures. An example of such a system is an airline reservation system.
- Firm: This is a combination of both hard and soft timeliness requirements. The computation has a shorter soft requirement and a longer hard requirement. For example, a patient ventilator must mechanically ventilate the patient a certain amount in a given time period. A few seconds' delay in the initiation of a breath is allowed, but not more than that.

One needs to distinguish between on-line systems, such as an airline reservation system, which operate in real time but with much less severe timeliness constraints than, say, a missile control system or a telephone switch. An interactive system with good response time is not necessarily a real-time system. These types of systems are often referred to as soft real-time systems. In a soft real-time system (such as the airline reservation system) late data is still good data; for hard real-time systems, late data is bad data. In this paper we concentrate on hard and firm real-time systems only.

Most real-time systems interface with and control hardware directly. The software for such systems is mostly custom-developed. Real-time applications can be either embedded applications or non-embedded (desktop) applications. Real-time systems often do not have the standard peripherals associated with a desktop computer, namely the keyboard, mouse or conventional display monitor. In most instances, real-time systems have a customized version of these devices.
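To illustrate the idea of a timing constraint, here is a minimal sketch of a periodic task that checks its own deadline. It is written in ordinary Python, which is not a real-time environment; in a real RTOS the kernel scheduler would guarantee (hard) or monitor (soft/firm) such constraints, and the 10 ms period and deadline below are made up for the example.

```python
import time

PERIOD = 0.010     # task period: 10 ms (illustrative value)
DEADLINE = 0.010   # relative deadline, equal to the period here

def control_step():
    """Stand-in for the actual computation (e.g. reading a sensor)."""
    time.sleep(0.002)  # pretend the work takes about 2 ms

def run_periodic(iterations=5):
    """Release the task periodically and report whether each job met its deadline."""
    next_release = time.monotonic()
    for i in range(iterations):
        start = time.monotonic()
        control_step()
        response_time = time.monotonic() - start
        met = response_time <= DEADLINE
        print(f"job {i}: response {response_time * 1000:.2f} ms, "
              f"deadline {'met' if met else 'MISSED'}")
        next_release += PERIOD
        time.sleep(max(0.0, next_release - time.monotonic()))

if __name__ == "__main__":
    run_periodic()
```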

Biometrics

Biometrics literally means "life measurement". Biometrics is the science and technology of measuring and statistically analyzing biological data. In information technology, biometrics usually refers to technologies for measuring and analyzing human body characteristics such as fingerprints, eye retinas and irises, voice patterns, facial patterns and hand measurements, especially for authenticating someone. Often seen in science-fiction action adventure movies, face pattern matchers and body scanners may emerge as replacements for computer passwords. Biometric systems can therefore be defined as automated methods of verifying or recognizing the identity of a living person based on a physiological or behavioral characteristic.

Automated methods: by this we mean that the analysis of the data is done by a computer with little or no human intervention. Traditional fingerprint matching and showing your driver's license or other forms of photo ID when proving your identity are examples of such systems.

Verification and recognition: this sets forth the two principal applications of biometric systems. Verification is where the user lays claim to an identity and the system decides whether they are who they say they are. It is analogous to a challenge/response protocol: the system challenges the user to prove their identity, and they respond by providing the biometric to do so. Recognition is where the user presents the biometric, and the system scans a database and determines the identity of the user automatically.

Living person: this points out the need to prevent attacks where a copy of the biometric of an authorized user is presented. Biometric systems should also prevent unauthorized users from gaining access when they are in possession of the body part of an authorized user necessary for the measurement.

Physiological and behavioral characteristics: this defines the two main classes of biometrics. Physiological characteristics are physical traits, like a fingerprint or retina, that are direct parts of the body. Behavioral characteristics are those that are based upon what we do, such as voiceprints and typing patterns. While physiological traits are usually more stable than behavioral traits, systems using them are typically more intrusive and more expensive to implement.
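The difference between verification (1:1) and recognition (1:N) can be shown with a toy sketch. The feature vectors, enrolled templates, Euclidean distance and threshold below are all invented for illustration; real biometric systems use far richer templates and matchers.

```python
import math

ENROLLED = {                     # hypothetical template database
    "alice": [0.12, 0.80, 0.33],
    "bob":   [0.55, 0.10, 0.91],
}
THRESHOLD = 0.15                 # maximum distance accepted as a match

def distance(a, b):
    """Plain Euclidean distance between two toy feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(claimed_identity, sample):
    """1:1 -- the user claims an identity; compare against that template only."""
    template = ENROLLED.get(claimed_identity)
    return template is not None and distance(sample, template) <= THRESHOLD

def recognize(sample):
    """1:N -- no claim; search the whole database for the closest match."""
    best_name, best_template = min(ENROLLED.items(),
                                   key=lambda kv: distance(sample, kv[1]))
    return best_name if distance(sample, best_template) <= THRESHOLD else None

if __name__ == "__main__":
    probe = [0.11, 0.82, 0.30]
    print(verify("alice", probe))   # True: claim checked against one template
    print(recognize(probe))         # 'alice': database searched for a match
```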

E-Commerce

E-commerce is the application of information technology to support business processes and the exchange of goods and services. E-cash came into being when people began to think that if we can store, forward and manipulate information, why can't we do the same with money? Both banks and post offices centralise distribution, information and credibility; e-money makes it possible to decentralise these functions. Electronic data interchange, which is a subset of e-commerce, is a set of data definitions that permits business forms to be exchanged electronically. The different payment schemes (E-cash, Net-cash and the PayMe system) and smart card technology are also discussed. The foundation of all requirements for commerce over the World Wide Web is a secure system of payment, so various security measures are adopted over the Internet.

E-commerce represents a market potentially worth hundreds of billions of dollars in just a few years to come, so it provides enormous opportunities for business. It is expected that in the near future, electronic transactions will be as popular as, if not more popular than, credit card purchases are today. Business is about information: it is about the right people having the right information at the right time, and exchanging that information efficiently and accurately will determine the success of the business.

There are three phases in the implementation of e-commerce:
1. Replace manual and paper-based operations with electronic alternatives
2. Rethink and simplify the information flows
3. Use the information flows in new and dynamic ways

Simply replacing the existing paper-based system will reap some benefits: it may reduce administrative costs and improve the level of accuracy in exchanging data, but it does not address doing business efficiently. E-commerce applications can help to reshape the ways we do business.

Rapid Prototyping

In the manufacturing arena, productivity is achieved by guiding a product from concept to market quickly and inexpensively. In most industries, physical models called prototypes are invariably prepared and subjected to various tests as part of the design evaluation process. Conventional prototyping may take weeks or even months, depending on the method used. Therefore people thought of developing processes that would produce the physical prototype directly from the CAD model without going through the various manufacturing steps. This led to the development of a class of processes known as rapid prototyping. Rapid prototyping automates the fabrication of a prototype part from a three-dimensional (3D) CAD drawing, and can be a quicker, more cost-effective means of building prototypes compared to conventional methods.

Internet Telephony

The Internet began as a communication network to satisfy the collaboration requirements of the government, the universities and corporate researchers. Until now the Internet has been optimized for efficient data communication between computers. This immense success of data transmission over the packet-switched network has led to the idea of transmitting voice over the Internet. The term Internet telephony has evolved to cover a range of different services. In general it refers to the transport of real-time media, such as voice and video, over the Internet to provide interactive communication among Internet users. The parties involved may access the Internet via a PC, a stand-alone Internet Protocol (IP) enabled device, or even by dialing up to a gateway from the handset of a traditional public switched telephone network (PSTN). It introduces an entirely new and enhanced way of communication.

IP telephony involves the use of the Internet to transmit real-time voice from one PC to another PC or to a telephone. The technology involves digitisation of speech and splitting it into data packets that are transmitted over the Internet; the compressed data is then re-assembled at the receiving end. This differs from the conventional public switched telephone network (PSTN), since the communication and transmission are performed across IP networks as opposed to conventional circuit-switched networks.
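The "digitise, packetise, reassemble" idea described above can be shown with a toy sketch. The code below slices a buffer of audio samples into fixed-size packets with sequence numbers, shuffles them to mimic out-of-order delivery over an IP network, and reorders them at the receiver. Real VoIP stacks use codecs and protocols such as RTP; the packet size and everything else here are simplified for illustration.

```python
import random

SAMPLES_PER_PACKET = 160   # e.g. roughly 20 ms of 8 kHz audio (illustrative)

def packetise(samples):
    """Split digitised speech into (sequence_number, payload) packets."""
    return [
        (seq, samples[i:i + SAMPLES_PER_PACKET])
        for seq, i in enumerate(range(0, len(samples), SAMPLES_PER_PACKET))
    ]

def reassemble(packets):
    """Receiver side: put packets back in order and join the payloads."""
    ordered = sorted(packets, key=lambda p: p[0])
    samples = []
    for _, payload in ordered:
        samples.extend(payload)
    return samples

if __name__ == "__main__":
    speech = list(range(800))            # stand-in for digitised samples
    packets = packetise(speech)
    random.shuffle(packets)              # packets may arrive out of order
    assert reassemble(packets) == speech
    print(f"reassembled {len(packets)} packets correctly")
```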

Java Ring

A Java Ring is a finger ring that contains a small microprocessor with built-in capabilities for the user, a sort of smart card that is wearable on a finger. Sun Microsystems' Java Ring was introduced at their JavaOne Conference in 1998 and, instead of a gemstone, contained an inexpensive microprocessor in a stainless-steel iButton running a Java virtual machine and preloaded with applets (little application programs). The rings were built by Dallas Semiconductor. Workstations at the conference had ring readers installed on them that downloaded information about the user from the conference registration system. This information was then used to enable a number of personalized services. For example, a robotic machine made coffee according to user preferences, which it downloaded when the user snapped the ring into another ring reader. Although Java Rings aren't widely used yet, such rings or similar devices could have a number of real-world applications, such as starting your car and having all your vehicle's components (such as the seat, mirrors and radio selections) automatically adjust to your preferences.

The Java Ring is an extremely secure Java-powered electronic token with a continuously running, unalterable real-time clock and rugged packaging, suitable for many applications. The jewel of the Java Ring is the Java iButton: a one-million-transistor, single-chip trusted microcomputer with a powerful Java Virtual Machine (JVM) housed in a rugged and secure stainless-steel case. The Java Ring is a stainless-steel ring, 16 millimeters (0.6 inches) in diameter, that houses this processor. The ring has 134 KB of RAM, 32 KB of ROM, a real-time clock and a Java virtual machine, which is a piece of software that recognizes the Java language and translates it for the user's computer system.

The Ring, first introduced at the JavaOne Conference, has been tested at Celebration School, an innovative K-12 school just outside Orlando, FL. The rings given to students are programmed with Java applets that communicate with host applications on networked systems. Applets are small applications that are designed to be run within another application. The Java Ring is snapped into a reader, called a Blue Dot receptor, to allow communication between a host system and the Java Ring. Designed to be fully compatible with the Java Card 2.0 standard, the processor features a high-speed 1024-bit modular exponentiator for RSA encryption, large RAM and ROM memory capacity, and an unalterable real-time clock. The packaged module has only a single electrical contact and a ground return, conforming to the specifications of the Dallas Semiconductor 1-Wire bus. Lithium-backed non-volatile SRAM offers high read/write speed and unparalleled tamper resistance through near-instantaneous clearing of all memory when tampering is detected, a feature known as rapid zeroization. Data integrity and clock function are maintained for more than 10 years. The 16-millimeter-diameter stainless-steel enclosure accommodates the larger chip sizes needed for up to 128 kilobytes of high-speed nonvolatile static RAM. The small and extremely rugged packaging of the module allows it to attach to the accessory of your choice to match individual lifestyles, such as a key fob, wallet, watch, necklace, bracelet or finger ring.

Cell Phone Viruses and Security

As cell phones become part and parcel of our lives, the threats posed to them are also on the increase. Like the Internet, today even cell phones are going online with technologies like EDGE and GPRS. This online network of cellphones has exposed them to the high risks caused by malware: viruses, worms and Trojans designed for the mobile phone environment. The security threat caused by this malware is so severe that a time may soon come when hackers could infect mobile phones with malicious software that deletes personal data or runs up a victim's phone bill by making toll calls. All this can lead to overload in mobile networks, which can eventually cause them to crash, and then there is the stealing of financial data, which poses risks for smart phones. As mobile technology is comparatively new and still in its developing stages compared to Internet technology, the anti-virus companies, along with the vendors of phones and mobile operating systems, have intensified research and development activities on this growing threat, with a more serious perspective.

10 Gigabit Ethernet

Definition
From its origin more than 25 years ago, Ethernet has evolved to meet the increasing demands of packet-switched networks. Due to its proven low implementation cost, its known reliability, and relative simplicity of installation and maintenance, its popularity has grown to the point that today nearly all traffic on the Internet originates or ends with an Ethernet connection. Further, as the demand for ever-faster network speeds has grown, Ethernet has been adapted to handle these higher speeds and the concomitant surges in volume demand that accompany them.
The One Gigabit Ethernet standard is already being deployed in large numbers in both corporate and public data networks, and has begun to move Ethernet from the realm of the local area network out to encompass the metro area network. Meanwhile, an even faster 10 Gigabit Ethernet standard is nearing completion. This latest standard is being driven not only by the increase in normal data traffic but also by the proliferation of new, bandwidth-intensive applications.
The draft standard for 10 Gigabit Ethernet is significantly different in some respects from earlier Ethernet standards, primarily in that it will only function over optical fiber and only operate in full-duplex mode, meaning that collision detection protocols are unnecessary. Ethernet can now step up to 10 gigabits per second; however, it remains Ethernet, including the packet format, and current capabilities are easily transferable to the new draft standard.
In addition, 10 Gigabit Ethernet does not obsolete current investments in network infrastructure. The task force heading the standards effort has taken steps to ensure that 10 Gigabit Ethernet is interoperable with other networking technologies such as SONET. The standard enables Ethernet packets to travel across SONET links with very little inefficiency.
Ethernet's expansion for use in metro area networks can now be expanded yet again onto wide area networks, both in concert with SONET and also end-to-end Ethernet. With the current balance of network traffic today heavily favoring packet-switched data over voice, it is expected that the new 10 Gigabit Ethernet standard will help to create a convergence between networks designed primarily for voice and the new data-centric networks.

10 Gigabit Ethernet Technology Overview
The 10 Gigabit Ethernet Alliance (10GEA) was established in order to promote standards-based 10 Gigabit Ethernet technology and to encourage the use and implementation of 10 Gigabit Ethernet as a key networking technology for connecting various computing, data and telecommunications devices. The charter of the 10 Gigabit Ethernet Alliance includes:
" Supporting the 10 Gigabit Ethernet standards effort conducted in the IEEE 802.3 working group
" Contributing resources to facilitate convergence and consensus on technical specifications
" Promoting industry awareness, acceptance, and advancement of the 10 Gigabit Ethernet standard
" Accelerating the adoption and usage of 10 Gigabit Ethernet products and services
" Providing resources to establish and demonstrate multi-vendor interoperability and generally encourage and promote interoperability and interoperability events

Robotic Surgery

Definition
The field of surgery is entering a time of great change, spurred on by remarkable recent advances in surgical and computer technology. Computer-controlled diagnostic instruments have been used in the operating room for years to help provide vital information through ultrasound, computer-aided tomography (CAT), and other imaging technologies. Only recently have robotic systems made their way into the operating room as dexterity-enhancing surgical assistants and surgical planners, in answer to surgeons' demands for ways to overcome the surgical limitations of minimally invasive laparoscopic surgery.
The robotic surgical system enables surgeons to remove gallbladders and perform other general surgical procedures while seated at a computer console and 3-D video imaging system across the room from the patient. The surgeons operate controls with their hands and fingers to direct a robotically controlled laparoscope. At the end of the laparoscope are advanced, articulating surgical instruments and miniature cameras that allow surgeons to peer into the body and perform the procedures.
Now Imagine : An army ranger is riddled with shrapnel deep behind enemy lines. Diagnostics from wearable sensors signal a physician at a nearby mobile army surgical hospital that his services are needed urgently. The ranger is loaded into an armored vehicle outfitted with a robotic surgery system. Within minutes, he is undergoing surgery performed by the physician, who is seated at a control console 100 kilometers out of harm's way.
The patient is saved. This is the power that the amalgamation of technology and surgical science is offering doctors. Just as computers revolutionized the latter half of the 20th century, the field of robotics has the potential to equally alter how we live in the 21st century. We've already seen how robots have changed the manufacturing of cars and other consumer goods by streamlining and speeding up the assembly line.
We even have robotic lawn mowers and robotic pets now. And robots have enabled us to see places that humans are not yet able to visit, such as other planets and the depths of the ocean. In the coming decades, we will see robots that have artificial intelligence, coming to resemble the humans that create them. They will eventually become self-aware and conscious, and be able to do anything that a human can. When we talk about robots doing the tasks of humans, we often talk about the future, but the future of robotic surgery is already here.

Socket Programming

Definition
Sockets are interfaces that can "plug into" each other over a network. Once so "plugged in", the programs so connected communicate. A "server" program is exposed via a socket connected to a certain /etc/services port number. A "client" program can then connect its own socket to the server's socket, at which time the client program's writes to the socket are read as stdin by the server program, and the server program's writes to stdout are read from the client's socket.
Before a user process can perform I/O operations, it calls Open to specify and obtain permissions for the file or device to be used. Once an object has been opened, the user process makes one or more calls to Read or Write data. Read reads data from the object and transfers it to the user process, while Write transfers data from the user process to the object. After all transfer operations are complete, the user process calls Close to inform the operating system that it has finished using that object.
When facilities for InterProcess Communication (IPC) and networking were added, the idea was to make the interface to IPC similar to that of file I/O. In Unix, a process has a set of I/O descriptors that one reads from and writes to. These descriptors may refer to files, devices, or communication channels (sockets). The lifetime of a descriptor is made up of three phases: creation (open socket), reading and writing (receive and send to socket), and destruction (close socket).
History
Sockets are used nearly everywhere, but are one of the most severely misunderstood technologies around. This is a 10,000-foot overview of sockets. It's not really a tutorial - you'll still have work to do in getting things working. It doesn't cover the fine points (and there are a lot of them), but I hope it will give you enough background to begin using them decently. I'm only going to talk about INET sockets, but they account for at least 99% of the sockets in use. And I'll only talk about STREAM sockets - unless you really know what you're doing (in which case this HOWTO isn't for you!), you'll get better behavior and performance from a STREAM socket than anything else. I will try to clear up the mystery of what a socket is, as well as give some hints on how to work with blocking and non-blocking sockets. But I'll start by talking about blocking sockets. You'll need to know how they work before dealing with non-blocking sockets.
Part of the trouble with understanding these things is that "socket" can mean a number of subtly different things, depending on context. So first, let's make a distinction between a "client" socket - an endpoint of a conversation, and a "server" socket, which is more like a switchboard operator. The client application (your browser, for example) uses "client" sockets exclusively; the web server it's talking to uses both "server" sockets and "client" sockets. Of the various forms of IPC (Inter Process Communication), sockets are by far the most popular. On any given platform, there are likely to be other forms of IPC that are faster, but for cross-platform communication, sockets are about the only game in town.
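The client/server distinction above maps directly onto the socket calls themselves. The sketch below is a minimal echo pair in Python using blocking STREAM (TCP) sockets; the address and port number are arbitrary values chosen only for the example.

```python
import socket

HOST, PORT = "127.0.0.1", 50007   # arbitrary example address and port

def run_server():
    """'Server' socket: acts like the switchboard operator described above."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)                      # wait for an incoming connection
        conn, addr = srv.accept()          # returns a new, conversation-style socket
        with conn:
            data = conn.recv(1024)         # read what the client wrote
            conn.sendall(data)             # echo it back

def run_client(message=b"hello"):
    """'Client' socket: one endpoint of the conversation."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(message)               # write to the socket...
        return cli.recv(1024)              # ...and read the server's reply

# Run run_server() in one process (or thread) and run_client() in another:
#   >>> run_client()
#   b'hello'
```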
They were invented in Berkeley as part of the BSD flavor of Unix. They spread like wildfire with the Internet. With good reason -- the combination of sockets with INET makes talking to arbitrary machines around the world unbelievably easy (at least compared to other schemes).

Intelligent Software Agents

Definition
Computers are as ubiquitous as automobiles and toasters, but exploiting their capabilities still seems to require the training of a supersonic test pilot. VCR displays blinking a constant 12 noon around the world testify to this conundrum. As interactive television, palmtop diaries and "smart" credit cards proliferate, the gap between millions of untrained users and an equal number of sophisticated microprocessors will become even more sharply apparent. With people spending a growing proportion of their lives in front of computer screens--informing and entertaining one another, exchanging correspondence, working, shopping and falling in love--some accommodation must be found between limited human attention spans and increasingly complex collections of software and data.
Computers currently respond only to what interface designers call direct manipulation. Nothing happens unless a person gives commands from a keyboard, mouse or touch screen. The computer is merely a passive entity waiting to execute specific, highly detailed instructions; it provides little help for complex tasks or for carrying out actions (such as searches for information) that may take an indefinite time.
If untrained consumers are to employ future computers and networks effectively, direct manipulation will have to give way to some form of delegation. Researchers and software companies have set high hopes on so called software agents, which "know" users' interests and can act autonomously on their behalf. Instead of exercising complete control (and taking responsibility for every move the computer makes), people will be engaged in a cooperative process in which both human and computer agents initiate communication, monitor events and perform tasks to meet a user's goals.
The average person will have many alter egos -- in effect, digital proxies -- operating simultaneously in different places. Some of these proxies will simply make the digital world less overwhelming by hiding technical details of tasks, guiding users through complex on-line spaces or even teaching them about certain subjects. Others will actively search for information their owners may be interested in or monitor specified topics for critical changes. Yet other agents may have the authority to perform transactions (such as on-line shopping) or to represent people in their absence. As the proliferation of paper and electronic pocket diaries has already foreshadowed, software agents will have a particularly helpful role to play as personal secretaries--extended memories that remind their bearers where they have put things, whom they have talked to, what tasks they have already accomplished and which remain to be finished.
Agent programs differ from regular software mainly by what can best be described as a sense of themselves as independent entities. An ideal agent knows what its goal is and will strive to achieve it. An agent should also be robust and adaptive, capable of learning from experience and responding to unforeseen situations with a repertoire of different methods. Finally, it should be autonomous so that it can sense the current state of its environment and act independently to make progress toward its goal.
Definition of Intelligent Software Agents
Intelligent Software Agents are a popular research object these days. Because the term "agent" is currently used by many parties in many different ways, it has become difficult for users to make a good estimation of what the possibilities of agent technology are. Moreover, these agents may have a wide range of applications which may significantly affect their definition, hence it is not easy to craft a rock-solid definition which could be generalized for all. However, an informal definition of an intelligent software agent may be given as:
"A piece of software which performs a given task using information gleaned from its environment to act in a suitable manner so as to complete the task successfully. The software should be able to adapt itself based on changes occurring in its environment, so that a change in circumstances will still yield the intended result."

TouchScreens

Introduction
A type of display screen that has a touch-sensitive transparent panel covering the screen. Instead of using a pointing device such as a mouse or light pen, you can use your finger to point directly to objects on the screen. Although touch screens provide a natural interface for computer novices, they are unsatisfactory for most applications because the finger is such a relatively large object. It is impossible to point accurately to small areas of the screen. In addition, most users find touch screens tiring to the arms after long use.
Touch-screens are typically found on larger displays and in phones with integrated PDA features. Most are designed to work with either your finger or a special stylus. Tapping a specific point on the display will activate the virtual button or feature displayed at that location. Some phones with this feature can also recognize handwriting written on the screen using a stylus, as a way to quickly input lengthy or complex information.
A touchscreen is an input device that allows users to operate a PC by simply touching the display screen. Touch input is suitable for a wide variety of computing applications. A touchscreen can be used with most PC systems as easily as other input devices such as track balls or touch pads. Browse the links below to learn more about touch input technology and how it can work for you.
History of Touch Screen Technology
A touch screen is a special type of visual display unit with a screen which is sensitive to pressure or touching. The screen can detect the position of the point of touch. The design of touch screens is best suited for inputting simple choices, and the choices are programmable. The device is very user-friendly since it 'talks' with the user as the user picks choices on the screen.
Touch technology turns a CRT, flat panel display or flat surface into a dynamic data entry device that replaces both the keyboard and mouse. In addition to eliminating these separate data entry devices, touch offers an "intuitive" interface. In public kiosks, for example, users receive no more instruction than 'touch your selection'. Specific areas of the screen are defined as "buttons" that the operator selects simply by touching them. One significant advantage of touch screen applications is that each screen can be customized to reflect only the valid options for each phase of an operation, greatly reducing the frustration of hunting for the right key or function.
Pen-based systems, such as the Palm Pilot® and signature capture systems, also use touch technology but are not included in this article. The essential difference is that the pressure levels are set higher for pen-based systems than for touch. Touch screens come in a wide range of options, from full color VGA and SVGA monitors designed for highly graphic Windows® or Macintosh® applications to small monochrome displays designed for keypad replacement and enhancement.
Specific figures on the growth of touch screen technology are hard to come by, but a 1995 study by Venture Development Corporation predicted overall growth of 17%, with at least 10% in the industrial sector. Other vendors agree that touch screen technology is becoming more popular because of its ease of use, proven reliability, expanded functionality, and decreasing cost.
A touch screen sensor is a clear glass panel with a touch responsive surface. The touch sensor/panel is placed over a display screen so that the responsive area of the panel covers the viewable area of the video screen. There are several different touch sensor technologies on the market today, each using a different method to detect touch input. The sensor generally has an electrical current or signal going through it and touching the screen causes a voltage or signal change. This voltage change is used to determine the location of the touch to the screen.
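The location calculation described above can be illustrated with a small sketch. The code below linearly maps a raw reading pair from a hypothetical resistive touch panel onto pixel coordinates, using made-up calibration values; real touch controllers perform a proper calibration with several reference points.

```python
# Hypothetical calibration data: raw ADC readings measured at two known
# screen corners, and the display resolution in pixels.
RAW_MIN = (200, 180)      # raw (x, y) reading at the top-left corner
RAW_MAX = (3900, 3850)    # raw (x, y) reading at the bottom-right corner
SCREEN = (800, 480)       # screen width and height in pixels

def raw_to_pixels(raw_x, raw_y):
    """Linearly interpolate a raw touch reading into pixel coordinates."""
    px = (raw_x - RAW_MIN[0]) * (SCREEN[0] - 1) / (RAW_MAX[0] - RAW_MIN[0])
    py = (raw_y - RAW_MIN[1]) * (SCREEN[1] - 1) / (RAW_MAX[1] - RAW_MIN[1])
    # Clamp to the visible area in case of noise outside the calibration range.
    px = min(max(px, 0), SCREEN[0] - 1)
    py = min(max(py, 0), SCREEN[1] - 1)
    return round(px), round(py)

def hit_button(pixel, button_rect):
    """Check whether the touch lands inside a 'button' region on the screen."""
    (x, y), (bx, by, bw, bh) = pixel, button_rect
    return bx <= x < bx + bw and by <= y < by + bh

if __name__ == "__main__":
    touch = raw_to_pixels(2050, 2000)          # roughly mid-screen
    print(touch, hit_button(touch, (350, 200, 100, 80)))
```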

CyberTerrorism

Definition
Cyberterrorism is a new terrorist tactic that makes use of information systems or digital technology, especially the Internet, as either an instrument or a target. As the Internet becomes more a way of life with us, it is becoming easier for its users to become targets of cyberterrorists. The number of areas in which cyberterrorists could strike is frightening, to say the least.
The difference between the conventional approaches of terrorism and new methods is primarily that it is possible to affect a large multitude of people with minimum resources on the terrorist's side, with no danger to him at all. We also glimpse into the reasons that caused terrorists to look towards the Web, and why the Internet is such an attractive alternative to them.
The growth of Information Technology has led to the development of this dangerous web of terror, for cyberterrorists could wreak maximum havoc within a small time span. Various situations that can be viewed as acts of cyberterrorism have also been covered. Banks are the most likely places to receive threats, but it cannot be said that any establishment is beyond attack. Tips by which we can protect ourselves from cyberterrorism have also been covered which can reduce problems created by the cyberterrorist.
We, as the information technology people of tomorrow, need to study and understand the weaknesses of existing systems, and figure out ways of ensuring the world's safety from cyberterrorists. A number of issues here are ethical, in the sense that computing technology is now available to the whole world, but if this gift is used wrongly, the consequences could be disastrous. It is important that we understand and mitigate cyberterrorism for the benefit of society, and try to curtail its growth, so that we can heal the present and live the future…

WINDOWS DNA

Definition
For some time now, both small and large companies have been building robust applications for personal computers that continue to become ever more powerful and available at increasingly lower costs. While these applications are being used by millions of users each day, new forces are having a profound effect on the way software developers build applications today and on the platform on which they develop and deploy their applications.
The increased presence of Internet technologies is enabling global sharing of information-not only from small and large businesses, but individuals as well. The Internet has sparked a new creativity in many, resulting in many new businesses popping up overnight, running 24 hours a day, seven days a week. Competition and the increased pace of change are putting ever-increasing demands for an application platform that enables application developers to build and rapidly deploy highly adaptive applications in order to gain strategic advantage.
It is possible to think of these new Internet applications needing to handle literally millions of users - a scale difficult to imagine just a few short years ago. As a result, applications need to deal with user volumes of this scale, be reliable enough to operate 24 hours a day, and be flexible enough to meet changing business needs. The application platform that underlies these types of applications must also provide a coherent application model along with a set of infrastructure and prebuilt services for enabling development and management of these new applications.
Introducing Windows DNA: Framework for a New Generation of Computing Solutions
Today, the convergence of Internet and Windows computing technologies promises exciting new opportunities for savvy businesses: to create a new generation of computing solutions that dramatically improve the responsiveness of the organization, to more effectively use the Internet and the Web to reach customers directly, and to better connect people to information any time or any place. When a technology system delivers these results, it is called a Digital Nervous System. A Digital Nervous System relies on connected PCs and integrated software to make the flow of information rapid and accurate. It helps everyone act faster and make more informed decisions. It prepares companies to react to unplanned events. It allows people to focus on business, not technology.
Creating a true Digital Nervous System takes commitment, time, and imagination. It is not something every company will have the determination to do. But those who do will have a distinct advantage over those who don't. In creating a Digital Nervous System, organizations face many challenges: How can they take advantage of new Internet technologies while preserving existing investments in people, applications, and data? How can they build modern, scalable computing solutions that are dynamic and flexible to change? How can they lower the overall cost of computing while making complex computing environments work?

DNA Chips

Introduction
DNA chips, also known as micro arrays, are a very significant technological development in molecular biology and are perhaps the most efficient tool available for functional genomics today. As is evident from the name, a micro array essentially consists of an array of either oligonucleotides or cDNA fixed on a substrate. There has been an explosion of information in the field of genomics in the last five years. The genomes of several organisms have been fully sequenced. The next step necessarily involves the analysis of the comparative expression levels of various genes and the identification of all the possible variations of sequence present in each gene or in the noncoding regulatory regions obtained from a particular population. Handling such large volumes of data requires techniques which necessitate miniaturization and massive-scale parallelism. Hence the DNA chip comes into the picture.
Researchers such as those at the University of Alaska Fairbanks' (UAF) Institute of Arctic Biology (IAB) and the Arctic Region Supercomputing Center (ARSC) seek to understand how organisms deal with the demands of their natural environment-as shown by the discovery of many remarkable adaptations that organisms have acquired living in the extremes of Alaska. Many of these adaptations have significant biomedical relevance in areas such as stroke, cardiovascular disease, and physiological stress. Somehow, our wild counterparts have adapted to severe environmental demands over long periods of time. Simultaneous to this research, scientists are also investigating the molecular changes that can be observed in humans as a result of their environment, such as through smoking or exposure to contaminants.
This push in research has resulted in the integration with life science research of approaches from many fields, including engineering, physics, mathematics, and computer science. One of the most well-known results of this is the Human Genome Project. Through this project, researchers were able to design instruments capable of performing many different types of molecular measurements, so that statistically significant and large-scale sampling of these molecules could be achieved. Now, biomedical research is producing data that show researchers that things are not always where they expected them to be, while at the same time researchers are at a rapidly expanding phase of discovery and analysis of large, highly repeatable measurements of complex molecular systems.
One of the more important and generally applicable tools that has emerged from this type of research is called DNA micro arrays, or DNA chip technology. This technology uses the fundamentals of Watson and Crick base-pairing along with hybridization to customize applications of DNA micro arrays to simultaneously interrogate a large number of genetic loci (those locations on the DNA molecules that have differing biological roles). The result of this type of analysis is that experiments that once took ten years in thousands of laboratories can now be accomplished with a small number of experiments in just one laboratory.
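The Watson-Crick base-pairing idea behind micro array hybridization can be shown with a toy sketch. The code below checks which probes on a hypothetical chip find a perfectly complementary fragment in a sample; the probe names and sequences are invented, and real micro array analysis works with fluorescence intensities and statistics rather than exact string matching.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """Return the reverse complement of a DNA sequence (Watson-Crick pairing)."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def hybridize(probes, sample_fragments):
    """Report which probes find a perfectly complementary fragment in the sample."""
    hits = {}
    for name, probe in probes.items():
        target = reverse_complement(probe)
        hits[name] = any(target in frag for frag in sample_fragments)
    return hits

if __name__ == "__main__":
    # Invented probe names and sequences, purely for illustration.
    probes = {"gene_A": "ATGCGT", "gene_B": "TTAACG"}
    sample = ["CCCACGCATGGG", "GGGGGGGG"]   # contains the complement of gene_A only
    print(hybridize(probes, sample))        # {'gene_A': True, 'gene_B': False}
```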

Firewire

Definition
FireWire, originally developed by Apple Computer, Inc., is a cross-platform implementation of the high-speed serial data bus, defined by the IEEE 1394-1995 and IEEE 1394a-2000 [FireWire 400] and IEEE 1394b [FireWire 800] standards, that moves large amounts of data between computers and peripheral devices. It features simplified cabling, hot swapping and transfer speeds of up to 800 megabits per second. FireWire is a high-speed serial input/output (I/O) technology for connecting peripheral devices to a computer or to each other. It is one of the fastest peripheral standards ever developed and now, at 800 megabits per second (Mbps), it's even faster.
Based on Apple-developed technology, FireWire was adopted in 1995 as an official industry standard (IEEE 1394) for cross-platform peripheral connectivity. By providing a high-bandwidth, easy-to-use I/O technology, FireWire inspired a new generation of consumer electronics devices from many companies, including Canon, Epson, HP, Iomega, JVC, LaCie, Maxtor, Mitsubishi, Matsushita (Panasonic), Pioneer, Samsung and Sony. FireWire has also been a boon to professional users because of the high-speed connectivity it has brought to audio and video production systems.
In 2001, the Academy of Television Arts & Sciences presented Apple with an Emmy award in recognition of the contributions made by FireWire to the television industry. Now FireWire 800, the next generation of FireWire technology, promises to spur the development of more innovative high-performance devices and applications. This technology brief describes the advantages of FireWire 800 and some of the applications for which it is ideally suited.
Topology
The 1394 protocol is a peer-to-peer network with a point-to-point signaling environment. Nodes on the bus may have several ports on them. Each of these ports acts as a repeater, retransmitting any packets received by other ports within the node. Figure 1 shows what a typical consumer may have attached to their 1394 bus. Because 1394 is a peer-to-peer protocol, a specific host isn't required, such as the PC in USB. In Figure 1, the digital camera could easily stream data to both the digital VCR and the DVD-RAM without any assistance from other devices on the bus.

FireWire uses 64-bit fixed addressing, based on the IEEE 1212 standard. There are three parts to each packet of information sent by a device over FireWire (a small bit-layout sketch follows the list):
" A 10-bit bus ID that is used to determine which FireWire bus the data came from " A 6-bit physical ID that identifies which device on the bus sent the data " A 48-bit storage area that is capable of addressing 256 terabytes of information for each node!
The bus ID and physical ID together comprise the 16-bit node ID, which allows for 64,000 nodes on a system. Individual FireWire cables can run as long as 4.5 meters. Data can be sent through up to 16 hops for a total maximum distance of 72 meters. Hops occur when devices are daisy-chained together. Look at the example below: the camcorder is connected to the external hard drive connected to Computer A, Computer A is connected to Computer B, which in turn is connected to Computer C. It takes four hops for Computer C to access the camera.

The 1394 protocol supports both asynchronous and isochronous data transfers.
Isochronous transfers: isochronous transfers are always broadcast in a one-to-one or one-to-many fashion. No error correction or retransmission is available for isochronous transfers. Up to 80% of the available bus bandwidth can be used for isochronous transfers.

Asynchronous transfers: asynchronous transfers are targeted to a specific node with an explicit address. They are not guaranteed a specific amount of bandwidth on the bus, but they are guaranteed a fair shot at gaining access to the bus when asynchronous transfers are permitted. This allows error-checking and retransmission mechanisms to take place.

Biochips

Most of us won’t like the idea of implanting a biochip in our body that identifies us uniquely and can be used to track our location. That would be a major loss of privacy. But there is a flip side to this! Such biochips could help agencies to locate lost children, downed soldiers and wandering Alzheimer’s patients. The human body is the next big target of chipmakers. It won’t be long before biochip implants come to the rescue of the sick, or those who are handicapped in some way. A large amount of money and research has already gone into this area of technology. In any case, such implants have already been experimented with. A few US companies are selling both chips and their detectors. The chips are the size of an uncooked grain of rice, small enough to be injected under the skin using a syringe needle. They respond to a signal from the detector, held just a few feet away, by transmitting an identification number. This number is then compared with database listings of registered pets. Daniel Man, a plastic surgeon in private practice in Florida, holds the patent on a more powerful device: a chip that would enable lost humans to be tracked by satellite.

A biochip is a collection of miniaturized test sites (micro arrays) arranged on a solid substrate that permits many tests to be performed at the same time in order to achieve higher throughput and speed. Typically, a biochip’s surface area is no larger than a fingernail. Like a computer chip that can perform millions of mathematical operations in one second, a biochip can perform thousands of biological operations, such as decoding genes, in a few seconds. A genetic biochip is designed to “freeze” into place the structures of many short strands of DNA (deoxyribonucleic acid), the basic chemical instructions that determine the characteristics of an organism. Effectively, it is used as a kind of “test tube” for real chemical samples. A specially designed microscope can determine where the sample hybridized with DNA strands in the biochip. Biochips helped to dramatically increase the speed of the identification of the estimated 80,000 genes in human DNA in the worldwide research collaboration known as the Human Genome Project. The microchip is described as a sort of “word search” function that can quickly sequence DNA. In addition to genetic applications, the biochip is being used in toxicological, protein, and biochemical research. Biochips can also be used to rapidly detect chemical agents used in biological warfare so that defensive measures can be taken. Motorola, Hitachi, IBM and Texas Instruments have entered the biochip business.

The biochip implant system consists of two components: a transponder and a reader or scanner. The transponder is the actual biochip implant. The biochip system is a radio frequency identification (RFID) system, using low-frequency radio signals to communicate between the biochip and the reader. The reading range or activation range between reader and biochip is small, normally between 2 and 12 inches.

The transponder
The transponder is the actual biochip implant. It is a passive transponder, meaning it contains no battery or energy source of its own. In comparison, an active transponder would provide its own energy source, normally a small battery. Because the passive transponder contains no battery, and nothing to wear out, it has a very long life, up to 99 years, and needs no maintenance. Being passive, it is inactive until the reader activates it by sending it a low-power electrical charge.
The reader reads or scans the implanted biochip and receives back data (in this case an identification number) from the biochip. The communication between biochip and reader is via low-frequency radio waves; since the communication uses very low frequency radio waves, it is not at all harmful to the human body. The biochip transponder consists of four parts: the computer microchip, the antenna coil, the capacitor and the glass capsule.

Computer microchip
The microchip stores a unique identification number from 10 to 15 digits long. The storage capacity of current microchips is limited, capable of storing only a single ID number. AVID (American Veterinary Identification Devices) claims their chips, using a nnn-nnn-nnn format, have the capability of over 70 trillion unique numbers. The unique ID number is “etched” or encoded via a laser onto the surface of the microchip before assembly. Once the number is encoded it is impossible to alter. The microchip also contains the electronic circuitry necessary to transmit the ID number to the reader.

Antenna coil
This is normally a simple coil of copper wire around a ferrite or iron core. This tiny, primitive radio antenna receives and sends signals from the reader or scanner.

Tuning capacitor
The capacitor stores the small electrical charge (less than 1/1000 of a watt) sent by the reader or scanner, which activates the transponder. This “activation” allows the transponder to send back the ID number encoded in the computer chip. Because radio waves are utilized to communicate between the transponder and reader, the capacitor is tuned to the same frequency as the reader.

Glass capsule
The glass capsule “houses” the microchip, antenna coil and capacitor. It is a small capsule, the smallest measuring 11 mm in length and 2 mm in diameter, about the size of an uncooked grain of rice. The capsule is made of biocompatible material such as soda lime glass. After assembly, the capsule is hermetically (air-tight) sealed, so no bodily fluids can touch the electronics inside. Because the glass is very smooth and susceptible to movement, a material such as a polypropylene polymer sheath is attached to one end of the capsule. This sheath provides a compatible surface with which the bodily tissue fibers bond or interconnect, resulting in a permanent placement of the biochip. The biochip is inserted into the subject with a hypodermic syringe. Injection is safe and simple, comparable to common vaccines. Anesthesia is not required nor recommended. In dogs and cats, the biochip is usually injected behind the neck between the shoulder blades.

The reader
The reader consists of an “exciter coil” which creates an electromagnetic field that, via radio signals, provides the necessary energy (less than 1/1000 of a watt) to “excite” or “activate” the implanted biochip. The reader also carries a receiving coil that receives the transmitted code or ID number sent back from the “activated” implanted biochip. This all takes place very fast, in milliseconds. The reader also contains the software and components to decode the received code and display the result on an LCD display. The reader can include an RS-232 port to attach a computer.

How it works
The reader generates a low-power electromagnetic field, in this case via radio signals, which “activates” the implanted biochip. This “activation” enables the biochip to send the ID code back to the reader via radio signals.
The reader amplifies the received code, converts it to digital format, and decodes and displays the ID number on the reader's LCD display. The reader must normally be within 2 to 12 inches of the biochip to communicate. The reader and biochip can communicate through most materials, except metal.

Intelligent Software Agents

Definition
Computers are as ubiquitous as automobiles and toasters, but exploiting their capabilities still seems to require the training of a supersonic test pilot. VCR displays blinking a constant 12 noon around the world testify to this conundrum. As interactive television, palmtop diaries and "smart" credit cards proliferate, the gap between millions of untrained users and an equal number of sophisticated microprocessors will become even more sharply apparent. With people spending a growing proportion of their lives in front of computer screens--informing and entertaining one another, exchanging correspondence, working, shopping and falling in love--some accommodation must be found between limited human attention spans and increasingly complex collections of software and data.

Computers currently respond only to what interface designers call direct manipulation. Nothing happens unless a person gives commands from a keyboard, mouse or touch screen. The computer is merely a passive entity waiting to execute specific, highly detailed instructions; it provides little help for complex tasks or for carrying out actions (such as searches for information) that may take an indefinite time. If untrained consumers are to employ future computers and networks effectively, direct manipulation will have to give way to some form of delegation.

Researchers and software companies have set high hopes on so-called software agents, which "know" users' interests and can act autonomously on their behalf. Instead of exercising complete control (and taking responsibility for every move the computer makes), people will be engaged in a cooperative process in which both human and computer agents initiate communication, monitor events and perform tasks to meet a user's goals. The average person will have many alter egos--in effect, digital proxies--operating simultaneously in different places. Some of these proxies will simply make the digital world less overwhelming by hiding technical details of tasks, guiding users through complex on-line spaces or even teaching them about certain subjects. Others will actively search for information their owners may be interested in or monitor specified topics for critical changes. Yet other agents may have the authority to perform transactions (such as on-line shopping) or to represent people in their absence. As the proliferation of paper and electronic pocket diaries has already foreshadowed, software agents will have a particularly helpful role to play as personal secretaries--extended memories that remind their bearers where they have put things, whom they have talked to, what tasks they have already accomplished and which remain to be finished.

Agent programs differ from regular software mainly by what can best be described as a sense of themselves as independent entities. An ideal agent knows what its goal is and will strive to achieve it. An agent should also be robust and adaptive, capable of learning from experience and responding to unforeseen situations with a repertoire of different methods. Finally, it should be autonomous so that it can sense the current state of its environment and act independently to make progress toward its goal.
Definition of intelligent software agents: Intelligent software agents are a popular research object these days. Because the term "agent" is currently used by many parties in many different ways, it has become difficult for users to make a good estimate of what the possibilities of agent technology are. Moreover, these agents may have a wide range of applications which may significantly affect their definition; hence it is not easy to craft a rock-solid definition that could be generalized for all. However, an informal definition of an intelligent software agent may be given as: "A piece of software which performs a given task using information gleaned from its environment to act in a suitable manner so as to complete the task successfully. The software should be able to adapt itself based on changes occurring in its environment, so that a change in circumstances will still yield the intended result."
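To make the definition above concrete, here is a minimal Python sketch of the sense/decide/act/adapt loop such an agent might run. The class names and the toy "environment" are invented for illustration and do not refer to any particular agent framework.

# Illustrative sketch of an agent's sense -> decide -> act -> adapt cycle.
# Everything here (Environment, ThermostatAgent, the numbers) is hypothetical.

import random

class Environment:
    """A toy environment whose state drifts unpredictably over time."""
    def __init__(self):
        self.temperature = 20.0

    def sense(self) -> float:
        self.temperature += random.uniform(-2, 2)   # an unforeseen change
        return self.temperature

    def apply(self, adjustment: float) -> None:
        self.temperature += adjustment

class ThermostatAgent:
    """Goal-directed agent: keep the temperature near a target, adapting as it goes."""
    def __init__(self, goal: float = 22.0):
        self.goal = goal
        self.gain = 0.5          # how aggressively it corrects

    def step(self, env: Environment) -> None:
        reading = env.sense()                 # sense the current state of its environment
        error = self.goal - reading           # compare with its goal
        env.apply(self.gain * error)          # act to make progress toward the goal
        if abs(error) > 3:                    # adapt: large surprises make it more cautious
            self.gain = max(0.1, self.gain * 0.9)

if __name__ == "__main__":
    env, agent = Environment(), ThermostatAgent()
    for _ in range(10):
        agent.step(env)
    print(f"final temperature: {env.temperature:.1f}")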

Face Recognition Technology

Definition
Humans are very good at recognizing faces and complex patterns. Even the passage of time doesn't affect this capability, and it would therefore help if computers became as robust as humans at face recognition. Machine recognition of human faces from still or video images has attracted a great deal of attention in the psychology, image processing, pattern recognition, neural science, computer security, and computer vision communities. Face recognition is probably one of the most non-intrusive and user-friendly biometric authentication methods currently available; a screensaver equipped with face recognition technology can automatically unlock the screen whenever the authorized user approaches the computer.

The face is an important part of who we are and how people identify us. It is arguably a person's most unique physical characteristic. While humans have had the innate ability to recognize and distinguish different faces for millions of years, computers are just now catching up. Visionics, a company based in New Jersey, is one of many developers of facial recognition technology. The twist to its particular software, FaceIt, is that it can pick someone's face out of a crowd, extract that face from the rest of the scene and compare it to a database full of stored images. In order for this software to work, it has to know what a basic face looks like. Facial recognition software is designed to pinpoint a face and measure its features. Each face has certain distinguishable landmarks which make up the different facial features. These landmarks are referred to as nodal points. There are about 80 nodal points on a human face. Here are a few of the nodal points that are measured by the software:
• Distance between eyes
" Width of nose
" Depth of eye sockets
" Cheekbones
" Jaw line
" Chin
These nodal points are measured to create a numerical code, a string of numbers that represents the face in a database. This code is called a faceprint. Only 14 to 22 nodal points are needed for the FaceIt software to complete the recognition process.

Software
Facial recognition software falls into a larger group of technologies known as biometrics. Biometrics uses biological information to verify identity. The basic idea behind biometrics is that our bodies contain unique properties that can be used to distinguish us from others. Besides facial recognition, biometric authentication methods also include:
" Fingerprint scan
" Retina scan
" Voice identification
Facial recognition methods generally involve a series of steps that serve to capture, analyze and compare a face to a database of stored images. The basic processes used by the FaceIt system to capture and compare images are:
1. Detection - When the system is attached to a video surveillance system, the recognition software searches the field of view of a video camera for faces. If there is a face in the view, it is detected within a fraction of a second. A multi-scale algorithm is used to search for faces in low resolution. The system switches to a high-resolution search only after a head-like shape is detected.
2. Alignment - Once a face is detected, the system determines the head's position, size and pose. A face needs to be turned at least 35 degrees toward the camera for the system to register it.
3. Normalization - The image of the head is scaled and rotated so that it can be registered and mapped into an appropriate size and pose. Normalization is performed regardless of the head's location and distance from the camera. Light does not impact the normalization process.
4. Representation - The system translates the facial data into a unique code. This coding process allows for easier comparison of the newly acquired facial data to stored facial data.
5. Matching - The newly acquired facial data is compared to the stored data and (ideally) linked to at least one stored facial representation.
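As a rough illustration of the representation and matching steps above (this is not Visionics' actual algorithm), the sketch below treats a faceprint as a small vector of nodal-point measurements and matches it against a database by nearest distance. All measurements, names and the threshold are invented for the example.

# Toy illustration of "representation" and "matching": a faceprint is modeled here
# as a plain vector of nodal-point measurements (eye distance, nose width, ...).
# Real systems use far richer features; this only shows the comparison idea.

import math

def faceprint(measurements: list[float]) -> list[float]:
    """Normalize measurements so overall scale (distance to camera) matters less."""
    scale = measurements[0] or 1.0            # e.g. use eye distance as the reference
    return [m / scale for m in measurements]

def distance(a: list[float], b: list[float]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(probe: list[float], database: dict[str, list[float]], threshold: float = 0.1):
    """Return the closest enrolled identity, or None if nothing is close enough."""
    best_name, best_dist = None, float("inf")
    for name, stored in database.items():
        d = distance(faceprint(probe), faceprint(stored))
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

if __name__ == "__main__":
    db = {"alice": [64.0, 30.0, 18.0, 95.0],    # made-up nodal measurements
          "bob":   [60.0, 34.0, 20.0, 101.0]}
    print(match([65.0, 30.5, 18.2, 96.0], db))  # prints: alice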

Adding Intelligence to Internet

Definition
Satellites have been used for years to provide communication network links. Historically, the use of satellites in the Internet can be divided into two generations. In the first generation, satellites were simply used to provide commodity links (e.g., T1) between countries. Internet Protocol (IP) routers were attached to the link endpoints to use the links as single-hop alternatives to multiple terrestrial hops. Two characteristics marked these first-generation systems: they had limited bandwidth, and they had large latencies that were due to the propagation delay to the high orbit position of a geosynchronous satellite.
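To put that propagation delay in perspective, here is a back-of-the-envelope estimate (my own illustration, not a figure from the paper), using the standard geostationary altitude of roughly 35,786 km and assuming the satellite is directly overhead; real slant paths are somewhat longer.

# Rough, illustrative latency estimate for a geosynchronous satellite hop.
ALTITUDE_KM = 35_786            # approximate geostationary orbit altitude
SPEED_OF_LIGHT_KM_S = 299_792   # radio signals propagate at about the speed of light

one_way_hop = 2 * ALTITUDE_KM / SPEED_OF_LIGHT_KM_S    # ground -> satellite -> ground
round_trip  = 2 * one_way_hop                          # request up/down plus reply up/down

print(f"one-way hop : {one_way_hop * 1000:.0f} ms")    # about 239 ms
print(f"round trip  : {round_trip * 1000:.0f} ms")     # about 477 ms, before any queuing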
In the second generation of systems now appearing, intelligence is added at the satellite link endpoints to overcome these characteristics. This intelligence is used as the basis for a system for providing Internet access engineered using a collection or fleet of satellites, rather than operating single satellite channels in isolation. Examples of intelligent control of a fleet include monitoring which documents are delivered over the system to make decisions adaptively on how to schedule satellite time; dynamically creating multicast groups based on monitored data to conserve satellite bandwidth; caching documents at all satellite channel endpoints; and anticipating user demands to hide latency.
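As one sketch of the kind of control logic described above (not the actual IDS implementation), the following hypothetical fleet controller decides, from observed request counts, whether a document should be multicast once to all endpoint caches or unicast to a single requester; the threshold, names and URLs are invented.

# Hypothetical sketch of a fleet controller's scheduling choice (not real IDS code):
# documents requested from many downlink sites get multicast once and cached everywhere,
# instead of being sent repeatedly over the satellite channel.

from collections import Counter

class FleetController:
    def __init__(self, multicast_threshold: int = 3):
        self.requests = Counter()                        # URL -> number of requesting sites
        self.multicast_threshold = multicast_threshold
        self.endpoint_caches: dict[str, set[str]] = {}   # site -> cached URLs

    def register_endpoint(self, site: str) -> None:
        self.endpoint_caches[site] = set()

    def handle_request(self, site: str, url: str) -> str:
        if url in self.endpoint_caches[site]:
            return "served from local cache"             # no satellite time used at all
        self.requests[url] += 1
        if self.requests[url] >= self.multicast_threshold:
            for cache in self.endpoint_caches.values():
                cache.add(url)                           # one multicast fills every cache
            return "multicast to all endpoints"
        self.endpoint_caches[site].add(url)
        return "unicast to requesting endpoint"

if __name__ == "__main__":
    ctrl = FleetController()
    for s in ("site-a", "site-b", "site-c"):
        ctrl.register_endpoint(s)
    print(ctrl.handle_request("site-a", "http://example.com/news"))  # unicast
    print(ctrl.handle_request("site-b", "http://example.com/news"))  # unicast
    print(ctrl.handle_request("site-c", "http://example.com/news"))  # multicast to all
    print(ctrl.handle_request("site-a", "http://example.com/news"))  # served from local cache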
This paper examines several key questions arising in the design of a satellite-based system:
• Can international Internet access using a geosynchronous satellite be competitive with today's terrestrial networks?
• What elements constitute an "intelligent control" for a satellite-based Internet link?
• What are the design issues that are critical to the efficient use of satellite channels?
The paper is organized as follows. The next section, Section 2, examines the above questions in enumerating principles for second-generation satellite delivery systems. Section 3 presents a case study of the Internet Delivery System (IDS), which is currently undergoing worldwide field trials.
Issues In Second-Generation Satellite Link Control
Can international Internet access using a geosynchronous satellite be competitive with today's terrestrial networks?
The first question is whether it makes sense today to use geosynchronous satellite links for Internet access. Alternatives include wired terrestrial connections, low earth orbiting (LEO) satellites, and wireless wide area network technologies (such as Local Multipoint Distribution Service or 2.4-GHz radio links in the U.S.).
We see three reasons why geosynchronous satellites will be used for some years to come for international Internet connections. The first reason is obvious: it will be years before terrestrial networks are able to provide adequate bandwidth uniformly around the world, given the explosive growth in Internet bandwidth demand and the amount of the world that is still unwired. Geosynchronous satellites can provide immediate relief. They can improve service to bandwidth-starved regions of the globe where terrestrial networks are insufficient and can supplement terrestrial networks elsewhere.