Definition
We all have our favorite radio stations that we preset into our car radios, flipping between them as we drive to and from work, on errands and around town. But when you travel too far from the source station, the signal breaks up and fades into static. Most radio signals can only travel about 30 or 40 miles from their source. On long trips that find you passing through different cities, you might have to change radio stations every hour or so as the signals fade in and out.
Now, imagine a radio station that can broadcast its signal from more than 22,000 miles (35,000 km) away and then come through on your car radio with complete clarity, without your ever having to change the station. Satellite Radio, or Digital Audio Radio Service (DARS), is a subscriber-based radio service that is broadcast directly from satellites. Subscribers will be able to receive up to 100 radio channels featuring CD-quality digital music, news, weather, sports, talk radio and other entertainment channels. Satellite radio is an idea nearly 10 years in the making. In 1992, the U.S. Federal Communications Commission (FCC) allocated a spectrum in the "S" band (2.3 GHz) for nationwide broadcasting of satellite-based Digital Audio Radio Service (DARS). In 1997, the FCC awarded 8-year radio broadcast licenses to two companies, Sirius Satellite Radio (formerly CD Radio) and XM Satellite Radio (formerly American Mobile Radio). Both companies have been working aggressively to be prepared to offer their radio services to the public by the end of 2000. It is expected that automotive radios will be the largest application of satellite radio.
The satellite era began in September 2001 when XM launched in selected markets, followed by full nationwide service in November. Sirius lagged slightly, with a gradual rollout beginning in February, including a quiet launch in the Bay Area on June 15. The nationwide launch comes July 1.
DSP Processor
Definition
The best way to understand the requirements is to examine typical DSP algorithms and identify how their computational requirements have influenced the architectures of DSP processors. Let us consider one of the most common processing tasks: the finite impulse response (FIR) filter.
For each tap of the filter, a data sample is multiplied by a filter coefficient and the result is added to a running sum across all of the taps. Hence the main component of the FIR filter is the dot product: multiply and add. These operations are not unique to the FIR filter; in fact, multiplication is one of the most common operations performed in signal processing - convolution, IIR filtering and the Fourier transform also involve heavy use of the multiply-accumulate operation. Originally, microprocessors implemented multiplication by a series of shift and add operations, each of which consumed one or more clock cycles. The first thing a DSP processor therefore requires is hardware that can multiply in a single cycle. Most DSP algorithms require a multiply-accumulate (MAC) unit.
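To make the dot-product structure concrete, here is a minimal FIR filter sketch in TypeScript; the function name, the zero-padding of samples before time zero and the 3-tap moving-average example are illustrative choices, not taken from the text. The inner `acc += ...` line is exactly the multiply-accumulate that DSP hardware performs once per clock cycle.

```typescript
// Minimal sketch of an FIR filter: one multiply-accumulate (MAC) per tap.
function firFilter(samples: number[], coeffs: number[]): number[] {
  const out: number[] = [];
  for (let n = 0; n < samples.length; n++) {
    let acc = 0; // running sum over all taps
    for (let k = 0; k < coeffs.length; k++) {
      const x = n - k >= 0 ? samples[n - k] : 0; // treat samples before time zero as 0
      acc += coeffs[k] * x; // the multiply-accumulate a DSP executes in one cycle
    }
    out.push(acc);
  }
  return out;
}

// Example: 3-tap moving average
console.log(firFilter([1, 2, 3, 4], [1 / 3, 1 / 3, 1 / 3]));
```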
In comparison to other types of computing tasks, DSP applications typically have very high computational requirements, since they often must execute DSP algorithms in real time on lengthy segments of data; therefore, parallel operation of several independent execution units is a must - for example, in addition to the MAC unit, an ALU and a shifter are also required. Executing a MAC in every clock cycle requires more than just a single-cycle MAC unit. It also requires the ability to fetch the MAC instruction, a data sample, and a filter coefficient from memory in a single cycle. Hence good DSP performance requires high memory bandwidth - higher than that of general-purpose microprocessors, which had a single bus connection to memory and could only make one access per cycle. The most common approach was to use two or more separate banks of memory, each of which was accessed by its own bus and could be written or read in a single cycle. This means programs are stored in one memory and data in another. With this arrangement, the processor could fetch an instruction and a data operand in parallel in every cycle. Since many DSP algorithms consume two data operands per instruction, a further optimization commonly used is to include a small bank of RAM near the processor core that is used as an instruction cache. When a small group of instructions is executed repeatedly, the cache is loaded with those instructions, freeing the instruction bus to be used for data fetches instead of instruction fetches - thus enabling the processor to execute a MAC in a single cycle. High memory bandwidth requirements are often further supported by dedicated hardware for calculating memory addresses. These address calculation units operate in parallel with the DSP processor's main execution units, enabling it to access data at a new location in memory without pausing to calculate the new address.
Memory accesses in DSP algorithms tend to exhibit very predictable patterns: for example, for each sample in an FIR filter, the filter coefficients are accessed sequentially from start to finish, and the access then starts over from the beginning of the coefficient vector when processing the next input sample. This is in contrast to other computing tasks, such as database processing, where accesses to memory are less predictable. DSP processor address generation units take advantage of this predictability by supporting specialized addressing modes that enable the processor to efficiently access data in the patterns commonly found in DSP algorithms. The most common of these modes is register-indirect addressing with post-increment, which automatically increments the address pointer in algorithms where repetitive computations are performed on a series of data stored sequentially in memory. Without this feature, the programmer would need to spend instructions explicitly incrementing the address pointer.
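As a rough illustration of what the address-generation unit saves, the hypothetical dot-product routine below (not from the text) makes the per-tap pointer updates explicit; on a DSP, the post-increment of the coefficient index and the update of the sample index happen in hardware, in parallel with the MAC, instead of costing extra instructions.

```typescript
// Sketch of the access pattern that register-indirect addressing with
// post-increment automates: coefficients are read sequentially from the start
// of the vector while samples are read backwards from the newest one.
function dotProduct(samples: number[], coeffs: number[], newest: number): number {
  let acc = 0;
  let cIdx = 0;      // coefficient "pointer", reset for every output sample
  let sIdx = newest; // sample "pointer"
  while (cIdx < coeffs.length && sIdx >= 0) {
    // post-increment / post-decrement stand in for the hardware addressing mode
    acc += coeffs[cIdx++] * samples[sIdx--];
  }
  return acc;
}
```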
Jini Technology
Definition
Jini was part of the original vision for Java, but it was put on the back burner while Sun waited for Java to gain widespread acceptance. As the Jini project revved up and more than 30 technology partners signed on, it became impossible to keep it under wraps. So Sun cofounder Bill Joy, who helped dream up Jini, leaked the news to the media earlier this month. It was promptly smothered in accolades and hyperbolic prose.
When you plug a new Jini-enabled device into a network, it broadcasts a message to any lookup service on the network saying, in effect, "Here I am. Is anyone else out there?" The lookup service registers the new machine, keeps a record of its attributes and sends a message back to the Jini device, letting it know where to reach the lookup service if it needs help. So when it comes time to print, for example, the device calls the lookup service, finds what it needs and sends the job to the appropriate machine. Jini actually consists of a very small piece of Java code that runs on your computer or device.
Jini lets you dynamically move code, and not just data, from one machine to another. That means you can send a Java program to any other Jini machine and run it there, harnessing the power of any machine on your network to complete a task or run a program. So far, Jini seems to offer little more than basic network services. Don't expect it to turn your household devices into supercomputers; it will take some ingenious engineering before your stereo will start dating your laptop. Jini can run on small handheld devices with little or no processing power, but these devices need to be network-enabled and need to be controlled by another Jini-enabled hardware or software piece by proxy. The first customer shipment is slated for the fall. Jini-enabled software could ship by the end of the year, and the first Jini-enabled devices could be in stores by next year.
Security. Jini will use the same security and authentication measures as Java. Unfortunately, Java's security model has not been introduced yet.
Microsoft. Without Jini, Java is just a language that can run on any platform. With it, Java becomes a networked system with many of the same capabilities as a network operating system, like Windows NT. Don't expect Microsoft to support Jini.
Competing technologies include Lucent's Inferno, a lightweight OS for connecting devices; Microsoft's Millennium, a Windows distributed computing model; and Hewlett-Packard's JetSend, a protocol that lets peripheral devices talk.
Sun Microsystems has a dream: The future of computing will not center around the personal computer, but around the network itself. Any network will do -- your office Ethernet grid, your home-office local area network, the Internet; it doesn't matter.
Sun has carried this banner for years, and essentially cemented its network-centric computing model with the invention of the Java programming language. This week in San Francisco, Sun -- with 37 big-name partners -- unveiled Jini, its latest and most ambitious initiative yet. A programming platform and connection technology, Jini is designed to allow painless, immediate networking of any and all compliant electronic devices, be they personal digital assistants, cell phones, dishwashers, printers, and so on. Partnering companies include hardware and software vendors, and marquee consumer electronics players like Sony.
Dual Core Processor
Definition
Seeing the technical difficulties in cranking higher clock speeds out of present single-core processors, dual core architecture has started to establish itself as the answer to the development of future processors. With the release of the AMD dual-core Opteron and the Intel Pentium Extreme Edition 840, April 2005 officially marks the beginning of dual core endeavors for both companies.
The transition from a single core to dual core architecture was triggered by a couple of factors. According to Moore's Law, the number of transistors (complexity) on a microprocessor doubles approximately every 18 months. The latest 2 MB Prescott core possesses more than 160 million transistors; breaking the 200 million mark is just a matter of time. Transistor count is one of the reasons driving the industry toward the dual core architecture. Instead of using the astronomically high transistor counts available to design a new, more complex single core processor that would offer higher performance than the present offerings, chip makers have decided to put these transistors to use in producing two identical yet independent cores and combining them into a single package.
To them, this is actually a far better use of the available transistors, and in return should give consumers more value for their money. Besides, with the single core's thermal envelope being pushed to its limit and the severe current leakage issues that have hit the silicon manufacturing industry ever since the transition to 90 nm chip fabrication, it is extremely difficult for chip makers (particularly Intel) to squeeze more clock speed out of the present single core design. Pushing for higher clock speeds is not a feasible option at present because of transistor current leakage, and adding more features to the core will increase the complexity of the design and make it harder to manage. These are the factors that have made the dual core option the more viable alternative for making full use of the number of transistors available.
What is a dual core processor?
A dual core processor is a CPU with two separate cores on the same die, each with its own cache. It's the equivalent of getting two microprocessors in one. In a single-core or traditional processor the CPU is fed strings of instructions it must order, execute, then selectively store in its cache for quick retrieval. When data outside the cache is required, it is retrieved through the system bus from random access memory (RAM) or from storage devices. Accessing these slows down performance to the maximum speed the bus, RAM or storage device will allow, which is far slower than the speed of the CPU. The situation is compounded when multi-tasking. In this case the processor must switch back and forth between two or more sets of data streams and programs. CPU resources are depleted and performance suffers.
In a dual core processor each core handles incoming data strings simultaneously to improve efficiency. Just as two heads are better than one, so are two hands. Now when one core is executing, the other can be accessing the system bus or executing its own code. Adding to this favorable scenario, both AMD's and Intel's dual-core flagships are 64-bit. To utilize a dual core processor, the operating system must be able to recognize multi-threading and the software must have simultaneous multi-threading (SMT) support written into its code. SMT enables parallel multi-threading wherein the cores are served multi-threaded instructions in parallel. Without SMT the software will only recognize one core. Adobe Photoshop is an example of SMT-aware software. SMT is also used with the multi-processor systems common to servers.
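To illustrate what "the software must be multi-threaded to benefit" means in practice, here is a hedged TypeScript sketch using Node.js worker_threads; the heavySum workload, the 50-million-element split and the single-file layout are invented for the example. With two workers, the operating system can schedule both halves onto the two cores at once, whereas a single-threaded version would keep only one core busy.

```typescript
// Sketch: splitting one CPU-bound job across two workers so a dual core CPU
// can run both halves in parallel. Compile with tsc and run the output with Node.js.
import { Worker, isMainThread, parentPort, workerData } from "worker_threads";

function heavySum(start: number, end: number): number {
  let s = 0;
  for (let i = start; i < end; i++) s += Math.sqrt(i); // artificial CPU-bound work
  return s;
}

if (isMainThread) {
  const half = 50_000_000;
  const spawn = (start: number, end: number) =>
    new Promise<number>((resolve, reject) => {
      const w = new Worker(__filename, { workerData: { start, end } });
      w.on("message", resolve);
      w.on("error", reject);
    });
  // Two workers: each can occupy its own core.
  Promise.all([spawn(0, half), spawn(half, 2 * half)])
    .then(([a, b]) => console.log("total:", a + b));
} else {
  const { start, end } = workerData as { start: number; end: number };
  parentPort!.postMessage(heavySum(start, end)); // send the partial result back
}
```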
An attractive value of dual core processors is that they do not require a new motherboard, but can be used in existing boards that feature the correct socket. For the average user the difference in performance will be most noticeable in multi-tasking until more software is SMT aware. Servers running multiple dual core processors will see an appreciable increase in performance.
Sensors on 3D Digitization
Definition
Machine vision involves the analysis of the properties of the luminous flux reflected or radiated by objects. To recover the geometrical structure of these objects, either to recognize them or to measure their dimensions, two basic vision strategies are available [1].
Passive vision attempts to analyze the structure of the scene under ambient light [1]. Stereoscopic vision is a passive optical technique. The basic idea is that two or more digital images are taken from known locations. The images are then processed to find the correlations between them. As soon as matching points are identified, the geometry can be computed.
Active vision attempts to reduce the ambiguity of scene analysis by structuring the way in which images are formed. Sensors that capitalize on active vision can resolve most of the ambiguities found with two-dimensional imaging systems. Lidar-based or triangulation-based laser range cameras are examples of active vision techniques. One digital 3D imaging system based on optical triangulation was developed and demonstrated.
AUTOSYNCHRONIZED SCANNER
The auto-synchronized scanner, depicted schematically in Figure 1, can provide registered range and colour data of visible surfaces. A 3D surface map is captured by scanning a laser spot onto a scene, collecting the reflected laser light, and finally focusing the beam onto a linear laser spot sensor. Geometric and photometric corrections of the raw data give two images in perfect registration: one with x, y, z co-ordinates and a second with reflectance data. A laser beam composed of multiple visible wavelengths is used for the purpose of measuring the colour map of the scene.
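For intuition about how triangulation converts a detected spot position into range, the simplified relation below applies to a basic single-spot triangulation geometry; it is an illustrative assumption from general triangulation optics, not the exact geometry of the auto-synchronized scanner.

$$ z \;\approx\; \frac{f\,b}{p} $$

Here $z$ is the range to the surface, $b$ is the baseline between the laser source and the collecting lens, $f$ is the focal length of the lens, and $p$ is the position of the imaged laser spot on the linear sensor. A small shift of the spot corresponds to a large change in range, which is why the spot position must be measured very precisely.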
MIMO Wireless Channels: Capacity and Performance Prediction
Multiple-input multiple-output (MIMO) communication techniques make use of multi-element antenna arrays at both the TX and the RX side of a radio link and have been shown theoretically to drastically improve the capacity over more traditional single-input multiple-output (SIMO) systems [2, 3, 5, 7]. SIMO channels in wireless networks can provide diversity gain, array gain, and interference-cancelling gain, among other benefits. In addition to these same advantages, MIMO links can offer a multiplexing gain by opening Nmin parallel spatial channels, where Nmin is the minimum of the number of TX and RX antennas. Under certain propagation conditions capacity gains proportional to Nmin can be achieved [8]. Space-time coding [14] and spatial multiplexing [1, 2, 7, 16] (a.k.a. BLAST) are popular signal processing techniques making use of MIMO channels to improve the performance of wireless networks. Previous work and open problems. The literature on realistic MIMO channel models is still scarce. For the line-of-sight (LOS) case, some previous work exists. In the fading case, previous studies have mostly been confined to i.i.d. Gaussian matrices, an idealistic assumption in which the entries of the channel matrix are independent complex Gaussian random variables [2, 6, 8]. The influence of spatial fading correlation on either the TX or the RX side of a wireless MIMO radio link has been addressed in [3, 15]. In practice, however, the realization of high MIMO capacity is sensitive not only to the fading correlation between individual antennas but also to the rank behavior of the channel. In the existing literature, high rank behavior has been loosely linked to the existence of a dense scattering environment. Recent successful demonstrations of MIMO technologies have been in indoor-to-indoor channels, where rich scattering is almost always guaranteed.
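The capacity claim above is usually quantified with the standard equal-power MIMO capacity expression, taken here from the general literature the paragraph cites rather than from this excerpt: with $N_T$ transmit antennas, $N_R$ receive antennas, channel matrix $\mathbf{H}$ and receive SNR $\rho$,

$$ C \;=\; \log_2 \det\!\left( \mathbf{I}_{N_R} + \frac{\rho}{N_T}\,\mathbf{H}\mathbf{H}^{H} \right) \ \text{bits/s/Hz}. $$

When $\mathbf{H}$ has full rank with well-conditioned singular values (the rich-scattering, i.i.d. case), this capacity grows roughly linearly in $N_{\min} = \min(N_T, N_R)$, which is the multiplexing gain referred to in the text; a rank-deficient channel loses most of that gain even if the per-antenna fading is uncorrelated.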
Definition:
MIMO is a technique for boosting wireless bandwidth and range by taking advantage of multiplexing. MIMO algorithms in a radio chipset send information out over two or more antennas. The radio signals reflect off objects, creating multiple paths that in conventional radios cause interference and fading. But MIMO uses these paths to carry more information, which is recombined on the receiving side by the MIMO algorithms. A conventional radio uses one antenna to transmit a data stream. A typical smart antenna radio, on the other hand, uses multiple antennas. This design helps combat distortion and interference. Examples of multiple-antenna techniques include switched antenna diversity selection, radio-frequency beamforming, digital beamforming and adaptive diversity combining. These smart antenna techniques are one-dimensional, whereas MIMO is multi-dimensional. It builds on one-dimensional smart antenna technology by simultaneously transmitting multiple data streams through the same channel, which increases wireless capacity.
Unlicensed Mobile Access
Definition
During the past year, mobile and integrated fixed/mobile operators announced an increasing number of fixed-mobile convergence initiatives, many of which are materializing in 2006. The majority of these initiatives are focused around UMA, the first standardized technology enabling seamless handover between mobile radio networks and WLANs. Clearly, in one way or another, UMA is a key agenda item for many operators. Operators are looking at UMA to address the indoor voice market (i.e. accelerate or control fixed-to-mobile substitution) as well as to enhance the performance of mobile services indoors. Furthermore, these operators are looking at UMA as a means to fend off the growing threat from new Voice-over-IP (VoIP) operators.
However, when evaluating a new 3GPP standard like UMA, many operators ask themselves how well it fits with other network evolution initiatives, including:
o UMTS
o Soft MSCs
o IMS Data Services
o I-WLAN
o IMS Telephony
This whitepaper aims to clarify the position of UMA in relation to these other strategic initiatives. For a more comprehensive introduction to the UMA opportunity, refer to "The UMA Opportunity," available on the Kineto web site (www.kineto.com).
Mobile Network Reference Model
To best understand the role UMA plays in mobile network evolution, it is helpful to first introduce a reference model for today's mobile networks. Figure 1 provides a simplified model for the majority of 3GPP-based mobile networks currently in deployment. Based on Release 99, they typically consist of the following:
o GSM/GPRS/EDGE Radio Access Network (GERAN): In mature mobile markets, the GERAN typically provides good cellular coverage throughout an operator's service territory and is optimized for the delivery of high-quality circuit-based voice services. While capable of delivering mobile data (packet) services, GERAN data throughput is typically under 80 Kbps and network usage cost is high.
o Circuit Core/Services: The core circuit network provides the services responsible for the vast majority of mobile revenues today. The circuit core consists of legacy Serving and Gateway Mobile Switching Centers (MSCs) providing mainstream mobile telephony services as well as a number of systems supporting the delivery of other circuit-based services including SMS, voice mail and ring tones.
o Packet Core/Services: The core packet network is responsible for providing mobile data services. The packet core consists of GPRS infrastructure (SGSNs and GGSNs) as well as a number of systems supporting the delivery of packet-based services including WAP and MMS.
Introducing UMA into Mobile Networks
For mobile and integrated operators, adding UMA to existing networks is not a major undertaking. UMA essentially defines a new radio access network (RAN), the UMA access network. Like GSM/GPRS/EDGE (GERAN) and UMTS (UTRAN) RANs, a UMA access network (UMAN) leverages well-defined, standard interfaces into an operator's existing circuit and packet core networks for service delivery. However, unlike GSM or UMTS RANs, which utilize expensive private backhaul circuits as well as costly base stations and licensed spectrum for wireless coverage, a UMAN enables operators to leverage their subscribers' existing broadband access connections for backhaul as well as inexpensive WLAN access points and unlicensed spectrum for wireless coverage.
Amorphous Computing and Swarm
Introduction
Amorphous computing consists of a multitude of interacting computers with modest computing power and memory, and modules for intercommunication. These collections of devices are known as swarms. The desired coherent global behaviour of the computer is achieved from the local interactions between the individual agents. The global behaviour of these vast numbers of unreliable agents is resilient to a small fraction of misbehaving agents and to noisy, intimidating environments. This makes them highly useful for sensor networks, MEMS, internet nodes, etc. Presently, of the 8 billion computational units existing worldwide, only 2% are stand-alone computers. This proportion is projected to decrease further with the paradigm shift to the biologically inspired amorphous computing model. An insight into amorphous and swarm computing will be given in this paper.
The ideas for amorphous computing have been derived from the swarm behaviour of social organisms like ants, bees and bacteria. Recently, biologists and computer scientists studying artificial life have modelled biological swarms to understand how such social animals interact, achieve goals and evolve. A certain level of intelligence, exceeding that of the individual agents, results from the swarm behaviour. Amorphous computing is established with a collection of computing particles - with modest memory and computing power - spread out over a geographical space and running identical programs. Swarm intelligence may be derived from the randomness, repulsion and unpredictability of the agents, thereby resulting in diverse solutions to the problem. There are no known criteria to evaluate swarm intelligence performance.
Inspiration
The development of swarm computing has been inspired by several natural phenomena. Some of the most complex activities, like optimal path finding, are executed by simple organisms. Lately, MEMS research has paved the way for manufacturing swarm agents at low cost and high efficiency.
The biological world
In the case of ant colonies, the worker ants have decentralised control and robust mechanisms for some complex activities, such as foraging, finding the shortest path to a food source and back home, building and protecting nests, and finding the richest food source in the locality. The ants communicate by using pheromones. Trails of pheromone are laid down by a given ant, which can be followed by other ants. Depending on the species, ants lay trails travelling from the nest, to the nest or possibly in both directions. Pheromones evaporate over time. Pheromones also accumulate when multiple ants use the same path. As the ants forage, the optimal path to food is likely to have the highest deposition of pheromones, as more ants follow this path and deposit pheromones. The longer paths are less likely to be travelled and therefore have only a smaller concentration of pheromones. With time, most of the ants follow the optimal path. When the food sources deplete, the pheromones evaporate and new trails can be discovered. This optimal path finding approach is highly dynamic and robust.
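The evaporation-plus-accumulation mechanism described above is usually formalized as the Ant System pheromone update rule, quoted here from the standard ant colony optimization literature as an illustration; the symbols and constants are not defined in this article.

$$ \tau_{ij} \;\leftarrow\; (1-\rho)\,\tau_{ij} \;+\; \sum_{k} \Delta\tau_{ij}^{(k)}, \qquad \Delta\tau_{ij}^{(k)} = \begin{cases} Q/L_k & \text{if ant } k \text{ used edge } (i,j) \\ 0 & \text{otherwise} \end{cases} $$

Here $\rho$ is the evaporation rate, $L_k$ is the length of ant $k$'s tour and $Q$ is a constant. Shorter paths receive larger deposits per trip, so their pheromone level grows while unused edges decay toward zero, which is exactly the shortest-path bias described in the paragraph above.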
Similar organization and behaviour are also present in flocks of birds. For a bird to participate in a flock, it only adjusts its movements to coordinate with the movements of its flock mates, typically the neighbours that are close to it in the flock. A bird in a flock simply tries to stay close to its neighbours, but avoid collisions with them. Each bird does not take commands from any leader bird, since there is no lead bird. Any bird can fly in the front, center or back of the swarm. Swarm behaviour helps birds take advantage of several things, including protection from predators (especially for birds in the middle of the flock) and searching for food (essentially each bird is exploiting the eyes of every other bird). Even complex biological entities like the brain are a swarm of interacting simple agents, the neurons. Each neuron does not have the holistic picture, but processes simple elements through its interaction with a few other neurons, and so paves the way for the thinking process.
AJAX - A New Approach to Web Applications
Introduction
Web application design has by far evolved in a number of ways since the time of its birth. To make web pages more interactive, various techniques have been devised both at the browser level and at the server level. The introduction of the XMLHttpRequest class in Internet Explorer 5 by Microsoft paved the way for interacting with the server using JavaScript, asynchronously. AJAX, shorthand for Asynchronous JavaScript And XML, is a technique which uses this XMLHttpRequest object of the browser, plus the Document Object Model and DHTML, and provides for making highly interactive web applications in which the entire web page need not be changed by a user action; only parts of the page are loaded dynamically by exchanging information with the server. This approach has been able to enhance the interactivity and speed of web applications to a great extent. Interactive applications such as Google Maps, Orkut and instant messengers make extensive use of this technique. This report presents an overview of the basic concepts of AJAX and how it is used in making web applications.

Creating web applications has been considered one of the most exciting jobs under current interaction design. But web interaction designers can't help feeling a little envious of their colleagues who create desktop software. Desktop applications have a richness and responsiveness that has seemed out of reach on the Web. The same simplicity that enabled the Web's rapid proliferation also creates a gap between the experiences that can be provided through web applications and the experiences users can get from a desktop application.

In the earliest days of the Web, designers chafed against the constraints of the medium. The entire interaction model of the Web was rooted in its heritage as a hypertext system: click the link, request the document, wait for the server to respond. Designers could not think of changing the basic foundation of the web, that is, the call-response model, to improve on web applications because of the various caveats, restrictions and compatibility issues associated with it. But the urge to enhance the responsiveness of web applications made the designers take up the task of making the Web work the best it could within the hypertext interaction model, developing new conventions for Web interaction that allowed their applications to reach audiences who never would have attempted to use desktop applications designed for the same tasks. The designers came up with a technique called AJAX, shorthand for Asynchronous JavaScript And XML, which is a web development technique for creating interactive web applications. The intent is to make web pages feel more responsive by exchanging small amounts of data with the server behind the scenes, so that the entire web page does not have to be reloaded each time the user makes a change. This is meant to increase the web page's interactivity, speed, and usability. AJAX is not a single new technology of its own but is a bunch of several technologies, each flourishing in its own right, coming together in powerful new ways.

What is AJAX?
AJAX is a set of technologies combined in an efficient manner so that the web application runs in a better way utilizing the benefits of all these simultaneously. AJAX incorporates:
1. standards-based presentation using XHTML and CSS;
2. dynamic display and interaction using the Document Object Model;
3. data interchange and manipulation using XML and XSLT;
4. asynchronous data retrieval using XMLHttpRequest;
5. and JavaScript binding everything together.
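As an illustration of the asynchronous retrieval step (item 4), here is a minimal TypeScript sketch of the classic XMLHttpRequest pattern; the URL "/latest-news" and the element id "news" are placeholders invented for the example, not part of the article. Only the targeted element is updated, which is the "exchange small amounts of data behind the scenes" behaviour described above.

```typescript
// Minimal sketch of the AJAX pattern: fetch a fragment asynchronously
// and update only part of the page, without a full reload.
function loadNews(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/latest-news", true); // true = asynchronous request
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // Only this element changes; the rest of the page stays as it is.
      const target = document.getElementById("news");
      if (target) target.innerHTML = xhr.responseText;
    }
  };
  xhr.send();
}
```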
Pivot VectorSpace Approach in Audio-Video Mixing
Definition
The PIVOT VECTOR SPACE APPROACH is a novel technique of audio-video mixing which automatically selects the best audio clip from the available database to be mixed with a given video shot. Until the development of this technique, audio-video mixing was a process that could be done only by professional audio-mixing artists. However, employing these artists is very expensive and is not feasible for home video mixing. Besides, the process is time-consuming and tedious.
In today's era, significant advances are happening constantly in the field of Information Technology. The development in IT-related fields such as multimedia is extremely vast. This is evident with the release of a variety of multimedia products such as mobile handsets, portable MP3 players, digital video camcorders, handycams etc. Hence, certain activities such as the production of home videos are easy, thanks to products such as handycams and digital video camcorders. Such a scenario did not exist a decade ago, since no such products were available in the market. As a result, production of home videos was not possible then, since it was reserved completely for professional video artists.
So in today's world, a large number of home videos are being made and the number of amateur and home video enthusiasts is very large. A home video artist can never match the aesthetic capabilities of a professional audio mixing artist. However, employing a professional mixing artist to develop home video is not feasible, as it is expensive, tedious and time-consuming.
The PIVOT VECTOR SPACE APPROACH is a technique that all amateur and home video enthusiasts can use in the creation of video footage with a professional look and feel. This technique saves cost and is fast. Since it is fully automatic, the user need not worry about his aesthetic capabilities. The PIVOT VECTOR SPACE APPROACH uses a pivot vector space mixing framework to incorporate the artistic heuristics for mixing audio with video. These artistic heuristics use high-level perceptual descriptors of audio and video characteristics. Low-level signal processing techniques compute these descriptors.
Video Aesthetic Features
The table shows, from the cinematic point of view, a set of attributed features (such as color and motion) required to describe videos. The computations for extracting aesthetic attributed features from low-level video features occur at the video shot granularity. Because some attributed features are based on still images (such as high light falloff), we compute them on the key frame of a video shot. We try to optimize the trade-off in accuracy and computational efficiency among the competing extraction methods. Also, even though we assume that the videos considered come in the MPEG format (widely used by several home video camcorders), the features exist independently of any particular representation format.
Alternative Models of Computing
Introduction
The seminar aims at introducing various other forms of computation. Concepts of quantum computing and DNA computing are introduced and discussed, particular algorithms (such as Shor's algorithm) are examined, and a solution of the Traveling Salesman Problem using DNA computing is also presented. In short, the seminar aims at opening windows to topics that may become tomorrow's mainstay in computer science.
Richard Feynman thought up the idea of a 'quantum computer', a computer that uses the effects of quantum mechanics to its advantage. Initially the idea was primarily of theoretical interest, but recent developments have brought it to the foreground. The first of these was Peter Shor's invention, at Bell Labs, of an algorithm to factor large numbers on a quantum computer. Using this algorithm, a quantum computer would be able to crack codes much more quickly than any ordinary (or classical) computer could; in fact, a quantum computer capable of performing Shor's algorithm would be able to break current cryptographic techniques (such as RSA) in a matter of seconds. With the motivation provided by this algorithm, quantum computing has gathered momentum and is a hot topic for research around the globe. Leonard M. Adleman, in turn, solved an unremarkable computational problem with an exceptional technique: he used 'mapping' to solve the Traveling Salesman Problem, a problem an average desktop machine could solve in a fraction of a second. Adleman, however, took seven days to find a solution. Even so his work was exceptional, because he solved the problem with DNA: it was a breakthrough and a landmark demonstration of computing at the molecular level.
Quantum computing and DNA computing each have two aspects: firstly, building such a computer, and secondly, deploying it to solve problems that are hard to solve within the present domain of the von Neumann architecture. In the seminar we consider the latter.
Shor's Algorithm
Shor's algorithm is based on a result from number theory, which states that the function f(a) = x^a mod n is periodic when x and n are coprime (that is, their greatest common divisor is one). In the context of Shor's algorithm, n is the number we wish to factor. If implemented, the algorithm will have a profound effect on cryptography, as it would compromise the security provided by public-key encryption (such as RSA), whose security rests on the assumed hardness of the factoring problem; Shor's algorithm makes factoring tractable using quantum computing techniques.
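As a purely classical illustration of why the period matters (the quantum speed-up itself is not shown), the sketch below brute-forces the period r of f(a) = x^a mod n and then recovers factors of n from gcd(x^(r/2) - 1, n) and gcd(x^(r/2) + 1, n). The values n = 15 and x = 7 are chosen only as a worked example.

// Classical sketch of the number-theoretic core of Shor's algorithm
function gcd(a: number, b: number): number {
  return b === 0 ? a : gcd(b, a % b);
}

function findPeriod(x: number, n: number): number {
  // smallest r with x^r = 1 (mod n); brute force, so only sensible for tiny n
  let value = x % n;
  let r = 1;
  while (value !== 1) {
    value = (value * x) % n;
    r++;
  }
  return r;
}

const n = 15, x = 7;                      // 7 and 15 are coprime
const r = findPeriod(x, n);               // r = 4, since 7^4 = 2401 = 1 (mod 15)
const half = Math.pow(x, r / 2);          // 7^2 = 49
console.log(gcd(half - 1, n), gcd(half + 1, n));   // prints 3 5, the factors of 15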
Saturday, February 21, 2009
EDI
EDI has no single consensus definition. Two generally accepted definitions are: a standardized format for communication of business information between computer applications, and computer-to-computer exchange of information between companies using an industry standard format. In short, Electronic Data Interchange (EDI) is the computer-to-computer exchange of business information using a public standard. EDI is a central part of Electronic Commerce (EC), because it enables businesses to exchange business information electronically much faster, cheaper and more accurately than is possible using paper-based systems. Electronic Data Interchange consists of data that has been put into a standard format and is electronically transferred between trading partners. Often an acknowledgement is returned to the sender confirming that the data was received. The term EDI is often used synonymously with the term EDT; the two terms are in fact different and should not be used interchangeably.
EDI vs EDT
The terms EDI and EDT are often misused. EDT, Electronic Data Transfer, is simply sending a file electronically to a trading partner. Although EDI documents are also sent electronically, they are sent in a standard format, and this standard format is what makes EDI different from EDT.
History of EDI
The government did not invent EC/EDI; it is merely taking advantage of an established technology that has been widely used in the private sector for the last few decades. EDI was first used in the transportation industry more than 20 years ago, by ocean, motor, air and rail carriers and the associated shippers, brokers, customs, freight forwarders and bankers. It was developed in the 1960s to accelerate the movement of documents, has been widely employed in automotive, retail, transportation and international trade since the mid-80s, and is steadily growing.
EDI Features
- Independent of the trading partners' internal computerized application systems.
- Interfaces with internal application systems rather than being integrated with them.
- Not limited by differences in the computer or communications equipment of the trading companies.
- Consists only of business data, not verbiage or free-form messages.
Let's take a high-level look at the EDI process. In a typical example, a car manufacturing company is a trading partner with an insurance company. The human resources department at the car manufacturer has a new employee who needs to be enrolled in an insurance plan. The HR representative enters the individual into the computer. The new employee's data is mapped into a standard format and sent electronically to the insurance company. The insurance company maps the data out of the standard format and into a format that is usable with its own computer. An acknowledgment is automatically generated by the insurance company and sent to the car manufacturer confirming that the data was received. Hence, to summarise the EDI process, the sequence of events in any EDI transaction is as follows:
1. The sender's own business application system assembles the data to be transmitted.
2. This data is translated into an EDI standard format (i.e., a transaction set).
3. The transaction set is transmitted either through a third-party network (e.g., a VAN) or directly to the receiver's EDI translation system.
4. The transaction set, in EDI standard format, is translated into files that are usable by the receiver's business application system.
5. The files are processed using the receiver's business application system.
The SAT (SIM Application Toolkit)
The SAT (SIM Application Toolkit) provides a flexible interface through which developers can build services and the MMI (Man Machine Interface) in order to enhance the functionality of the mobile. This module is aimed not at service developers but at network engineers who require a grounding in the concepts of the SAT and how it may impact network architecture and performance. It explores the basic SAT interface along with the architecture required to deliver effective SAT-based services to the handset.
Real Time Operating System
Within the last ten years real-time systems research has been transformed from a niche industry into a mainstream enterprise with clients in a wide variety of industries and academic disciplines. It will continue to grow in importance and affect an increasing number of industries, as many of the reasons for its rise to prominence will persist for the foreseeable future.
What is an RTOS?
Real-time computing and real-time operating systems (RTOS) form an emerging discipline in software engineering. This is an embedded technology whereby the application software performs the dual function of an operating system as well. In an RTOS the correctness of the system depends not only on the logical result but also on the time at which the results are obtained.
A real-time system:
>> provides a deterministic response to external events;
>> has the ability to process data at its rate of occurrence;
>> is deterministic in its functional and timing behavior;
>> has its timing analyzed in the worst case, not in the typical or normal case, to guarantee a bounded response under any circumstances.
The seminar will provide a practical understanding of the goals, structure and operation of a real-time operating system (RTOS). The basic concepts of real-time systems, such as the RTOS kernel, will be described in detail. The structure of the kernel is discussed, stressing the factors which affect response times and performance. Examples of RTOS functions such as scheduling, interrupt processing and intertask communication structures will also be discussed, and features of commercially available RTOS products are presented.
A real-time system is one where the timeliness of the result of a calculation is important. Examples include military weapons systems, factory control systems, and Internet video and audio streaming. Different definitions of real-time systems exist. Here are just a few:
- Real-time computing is computing where system correctness depends not only on the correctness of the logical result of the computation but also on the result delivery time.
- A real-time system is an interactive system that maintains an on-going relationship with an asynchronous environment, i.e. an environment that progresses irrespective of the real-time system, in an uncooperative manner.
- Real-time (software) (IEEE 610.12-1990): pertaining to a system or mode of operation in which computation is performed during the actual time that an external process occurs, in order that the computation results may be used to control, monitor, or respond in a timely manner to the external process.
From the above definitions it is understood that in real-time systems, TIME is the biggest constraint. This makes real-time systems different from ordinary systems. Thus in a real-time system data needs to be processed at some regular and timely rate, and the system must also respond quickly to events occurring at non-regular rates. In real-world systems there is some delay between the presentation of inputs and the appearance of all associated outputs, called the response time. A real-time system must therefore satisfy explicit response-time constraints or risk severe consequences, including failure.
Real-Time Systems and Real-Time Operating Systems
Timeliness is the single most important aspect of a real-time system. These systems respond to a series of external inputs, which arrive in an unpredictable fashion. The real-time system processes these inputs, takes appropriate decisions and also generates the output necessary to control the peripherals connected to it.
As defined by Donald Gillies, a real-time system is one in which the correctness of the computations depends not only upon the logical correctness of the computation but also upon the time at which the result is produced. If the timing constraints are not met, system failure is said to have occurred. It is essential that the timing constraints of the system are guaranteed to be met, and guaranteeing timing behaviour requires that the system be predictable. The design of a real-time system must specify the timing requirements of the system and ensure that the system performance is both correct and timely. There are three types of time constraints:
- Hard: a late response is incorrect and implies a system failure. An example of such a system is medical equipment monitoring the vital functions of a human body, where a late response would be considered a failure.
- Soft: timeliness requirements are defined using an average response time. If a single computation is late, it is not usually significant, although repeated late computations can result in system failure. An example of such a system is an airline reservation system.
- Firm: a combination of both hard and soft timeliness requirements. The computation has a shorter soft requirement and a longer hard requirement. For example, a patient ventilator must mechanically ventilate the patient a certain amount in a given time period. A delay of a few seconds in the initiation of a breath is allowed, but not more than that.
One needs to distinguish between on-line systems, such as an airline reservation system, which operate in real time but with much less severe timeliness constraints than, say, a missile control system or a telephone switch. An interactive system with good response time is not necessarily a real-time system; such systems are often referred to as soft real-time systems. In a soft real-time system (such as the airline reservation system) late data is still good data; for hard real-time systems, however, late data is bad data. In this paper we concentrate on hard and firm real-time systems only. Most real-time systems interface with and control hardware directly, and the software for such systems is mostly custom-developed. Real-time applications can be either embedded applications or non-embedded (desktop) applications. Real-time systems often do not have the standard peripherals associated with a desktop computer, namely the keyboard, mouse or conventional display monitor; in most instances, real-time systems have customized versions of these devices.
Biometrics
Biometrics literally means life measurement. Biometrics is the science and technology of measuring and statistically analyzing biological data. In information technology, biometrics usually refers to technologies for measuring and analyzing human body characteristics such as fingerprints, eye retinas and irises, voice patterns, facial patterns, and hand measurements, especially for authenticating someone. Often seen in science-fiction action adventure movies, face pattern matchers and body scanners may emerge as replacements for computer passwords. Biometric systems can thus be defined as automated methods of verifying or recognizing the identity of a living person based on a physiological or behavioral characteristic.
Automated methods: by this we mean that the analysis of the data is done by a computer with little or no human intervention. Traditional fingerprint matching and showing your driver's license or other forms of photo ID when proving your identity are examples of methods that are not automated in this sense.
Verification and recognition: this sets forth the two principal applications of biometric systems. Verification is where the user lays claim to an identity and the system decides whether they are who they say they are. It is analogous to a challenge/response protocol: the system challenges the user to prove their identity, and they respond by providing the biometric to do so. Recognition is where the user presents the biometric, and the system scans a database and determines the identity of the user automatically.
Living person: this points out the need to prevent attacks where a copy of the biometric of an authorized user is presented. Biometric systems should also prevent unauthorized users from gaining access when they are in possession of the body part of an authorized user necessary for the measurement.
Physiological and behavioral characteristics: this defines the two main classes of biometrics. Physiological characteristics are physical traits, like a fingerprint or retina, that are direct parts of the body. Behavioral characteristics are those based upon what we do, such as voiceprints and typing patterns. While physiological traits are usually more stable than behavioral traits, systems using them are typically more intrusive and more expensive to implement.
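The sketch below is a hedged illustration of the verification-versus-recognition distinction just described, not a real biometric matcher: the Template type, the similarity function and the 0.8 threshold are illustrative assumptions only.

// Verification vs recognition, sketched in TypeScript
type Template = number[];                       // e.g. a simplified feature vector

// Toy similarity score: inverse of Euclidean distance; real matchers are far richer.
function similarity(a: Template, b: Template): number {
  const dist = Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
  return 1 / (1 + dist);
}

const THRESHOLD = 0.8;                          // assumed acceptance threshold

// Verification: the user claims an identity, so we compare against one template.
function verify(enrolled: Template, sample: Template): boolean {
  return similarity(enrolled, sample) >= THRESHOLD;
}

// Recognition: no claim is made, so we search the whole enrolled database.
function recognize(db: Map<string, Template>, sample: Template): string | null {
  let bestId: string | null = null;
  let bestScore = THRESHOLD;
  for (const [id, tpl] of db) {
    const score = similarity(tpl, sample);
    if (score >= bestScore) { bestId = id; bestScore = score; }
  }
  return bestId;                                // null means "not recognized"
}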
E-Commerce
E-commerce is the application of information technology to support business processes and the exchange of goods and services. E-cash came into being when people began to think that if we can store, forward and manipulate information, why can't we do the same with money? Both banks and post offices centralise distribution, information and credibility; e-money makes it possible to decentralise these functions. Electronic Data Interchange, a subset of e-commerce, is a set of data definitions that permits business forms to be exchanged electronically. The different payment schemes (E-cash, Net-cash and the PayMe system) and smart card technology are also covered. The foundation of all requirements for commerce over the World Wide Web is a secure system of payment, so various security measures are adopted over the Internet.
E-commerce represents a market potentially worth hundreds of billions of dollars in just a few years, so it provides enormous opportunities for business. It is expected that in the near future electronic transactions will be as popular as, if not more popular than, credit card purchases today. Business is about information: it is about the right people having the right information at the right time, and exchanging that information efficiently and accurately will determine the success of the business.
There are three phases in the implementation of e-commerce:
- replace manual and paper-based operations with electronic alternatives;
- rethink and simplify the information flows;
- use the information flows in new and dynamic ways.
Simply replacing the existing paper-based system may reduce administrative costs and improve the level of accuracy in exchanging data, but it does not by itself address doing business efficiently. E-commerce applications can help to reshape the ways business is done.
Rapid Prototyping
In the manufacturing arena, productivity is achieved by guiding a product from concept to market quickly and inexpensively. In most industries, physical models called prototypes are invariably prepared and subjected to various tests as part of the design evaluation process. Conventional prototyping may take weeks or even months, depending on the method used. People therefore thought of developing processes that would produce the physical prototype directly from the CAD model, without going through the various manufacturing steps; this led to the development of a class of processes known as rapid prototyping. Rapid prototyping automates the fabrication of a prototype part from a three-dimensional (3D) CAD drawing, and can be a quicker, more cost-effective means of building prototypes than conventional methods.
Internet Telephony
The internet began as a communication network to satisfy the collaboration requirements of government, universities and corporate researchers. Until now the internet has been optimized for efficient data communication between computers. This immense success of data transmission over the packet-switched network has led to the idea of transmitting voice over the internet. The term internet telephony has come to cover a range of different services; in general it refers to the transport of real-time media such as voice and video over the internet to provide interactive communication among internet users. The parties involved may access the internet via a PC, a stand-alone Internet Protocol (IP) enabled device, or even by dialing up to a gateway from the handset of a traditional public switched telephone network (PSTN). It introduces an entirely new and enhanced way of communicating. IP telephony involves the use of the internet to transmit real-time voice from one PC to another PC or to a telephone. The technology involves digitisation of speech and splitting it into data packets that are transmitted over the internet; the compressed data is then re-assembled at the receiving end. This differs from the conventional public switched telephone network (PSTN), since communication and transmission are performed across IP networks rather than conventional circuit-switched networks.
Java Ring
A Java Ring is a finger ring that contains a small microprocessor with built-in capabilities for the user, a sort of smart card that is wearable on a finger. Sun Microsystems' Java Ring was introduced at their JavaOne Conference in 1998 and, instead of a gemstone, contained an inexpensive microprocessor in a stainless-steel iButton running a Java virtual machine and preloaded with applets (little application programs). The rings were built by Dallas Semiconductor. Workstations at the conference had ring readers installed on them that downloaded information about the user from the conference registration system. This information was then used to enable a number of personalized services; for example, a robotic machine made coffee according to user preferences, which it downloaded when the ring was snapped into another ring reader. Although Java Rings aren't widely used yet, such rings or similar devices could have a number of real-world applications, such as starting your car and having all your vehicle's components (such as the seat, mirrors, and radio selections) automatically adjust to your preferences.
The Java Ring is an extremely secure Java-powered electronic token with a continuously running, unalterable real-time clock and rugged packaging, suitable for many applications. The jewel of the Java Ring is the Java iButton: a one-million-transistor, single-chip trusted microcomputer with a powerful Java Virtual Machine (JVM) housed in a rugged and secure stainless-steel case. The ring itself is 16 millimeters (0.6 inches) in diameter and houses the iButton processor along with 134 KB of RAM, 32 KB of ROM, a real-time clock and a Java virtual machine, a piece of software that recognizes the Java language and translates it for the user's computer system.
The Ring, first introduced at the JavaOne Conference, has been tested at Celebration School, an innovative K-12 school just outside Orlando, FL. The rings given to students are programmed with Java applets that communicate with host applications on networked systems (applets are small applications designed to be run within another application). The Java Ring is snapped into a reader, called a Blue Dot receptor, to allow communication between a host system and the Java Ring. Designed to be fully compatible with the Java Card 2.0 standard, the processor features a high-speed 1024-bit modular exponentiator for RSA encryption, large RAM and ROM memory capacity, and an unalterable real-time clock. The packaged module has only a single electric contact and a ground return, conforming to the specifications of the Dallas Semiconductor 1-Wire bus. Lithium-backed non-volatile SRAM offers high read/write speed and unparalleled tamper resistance through near-instantaneous clearing of all memory when tampering is detected, a feature known as rapid zeroization. Data integrity and clock function are maintained for more than 10 years. The 16-millimeter-diameter stainless-steel enclosure accommodates the larger chip sizes needed for up to 128 kilobytes of high-speed nonvolatile static RAM. The small and extremely rugged packaging of the module allows it to attach to an accessory of your choice to match individual lifestyles, such as a key fob, wallet, watch, necklace, bracelet, or finger ring.
Cell Phone Viruses and Security
As cell phones become part and parcel of our lives, the threats posed to them are also on the increase. Like the internet, cell phones today are going online with technologies such as EDGE and GPRS. This online network of cell phones has exposed them to the high risks posed by malware: viruses, worms and Trojans designed for the mobile phone environment. The security threat caused by this malware is severe enough that a time may soon come when hackers can infect mobile phones with malicious software that deletes personal data or runs up a victim's phone bill by making toll calls. All this can lead to overload in mobile networks, which can eventually cause them to crash, and to the theft of financial data, which poses particular risks for smartphones. As mobile technology is comparatively new and still developing compared with internet technology, anti-virus companies, along with vendors of phones and mobile operating systems, have intensified research and development on this growing threat and are treating it with a more serious perspective.
10 Gigabit Ethernet
Definition
From its origin more than 25 years ago, Ethernet has evolved to meet the increasing demands of packet-switched networks. Due to its proven low implementation cost, its known reliability, and relative simplicity of installation and maintenance, its popularity has grown to the point that today nearly all traffic on the Internet originates or ends with an Ethernet connection. Further, as the demand for ever-faster network speeds has grown, Ethernet has been adapted to handle these higher speeds and the concomitant surges in volume demand that accompany them.
The One Gigabit Ethernet standard is already being deployed in large numbers in both corporate and public data networks, and has begun to move Ethernet from the realm of the local area network out to encompass the metro area network. Meanwhile, an even faster 10 Gigabit Ethernet standard is nearing completion. This latest standard is being driven not only by the increase in normal data traffic but also by the proliferation of new, bandwidth-intensive applications.
The draft standard for 10 Gigabit Ethernet is significantly different in some respects from earlier Ethernet standards, primarily in that it will only function over optical fiber and only operate in full-duplex mode, meaning that collision detection protocols are unnecessary. Ethernet can now step up to 10 gigabits per second; however, it remains Ethernet, including the packet format, and current capabilities are easily transferable to the new draft standard.
In addition, 10 Gigabit Ethernet does not obsolete current investments in network infrastructure. The task force heading the standards effort has taken steps to ensure that 10 Gigabit Ethernet is interoperable with other networking technologies such as SONET. The standard enables Ethernet packets to travel across SONET links with very little inefficiency.
Ethernet's expansion for use in metro area networks can now be extended yet again onto wide area networks, both in concert with SONET and as end-to-end Ethernet. With the current balance of network traffic heavily favoring packet-switched data over voice, it is expected that the new 10 Gigabit Ethernet standard will help to create a convergence between networks designed primarily for voice and the new data-centric networks.
10 Gigabit Ethernet Technology Overview
The 10 Gigabit Ethernet Alliance (10GEA) was established in order to promote standards-based 10 Gigabit Ethernet technology and to encourage the use and implementation of 10 Gigabit Ethernet as a key networking technology for connecting various computing, data and telecommunications devices. The charter of the 10 Gigabit Ethernet Alliance includes:
- supporting the 10 Gigabit Ethernet standards effort conducted in the IEEE 802.3 working group;
- contributing resources to facilitate convergence and consensus on technical specifications;
- promoting industry awareness, acceptance, and advancement of the 10 Gigabit Ethernet standard;
- accelerating the adoption and usage of 10 Gigabit Ethernet products and services;
- providing resources to establish and demonstrate multi-vendor interoperability, and generally encouraging and promoting interoperability and interoperability events.
Robotic Surgery
Definition
The field of surgery is entering a time of great change, spurred on by remarkable recent advances in surgical and computer technology. Computer-controlled diagnostic instruments have been used in the operating room for years to help provide vital information through ultrasound, computer-aided tomography (CAT), and other imaging technologies. Only recently have robotic systems made their way into the operating room as dexterity-enhancing surgical assistants and surgical planners, in answer to surgeons' demands for ways to overcome the surgical limitations of minimally invasive laparoscopic surgery.
The robotic surgical system enables surgeons to remove gallbladders and perform other general surgical procedures while seated at a computer console and 3-D video imaging system across the room from the patient. The surgeons operate controls with their hands and fingers to direct a robotically controlled laparoscope. At the end of the laparoscope are advanced, articulating surgical instruments and miniature cameras that allow surgeons to peer into the body and perform the procedures.
Now imagine: an army ranger is riddled with shrapnel deep behind enemy lines. Diagnostics from wearable sensors signal a physician at a nearby mobile army surgical hospital that his services are needed urgently. The ranger is loaded into an armored vehicle outfitted with a robotic surgery system. Within minutes, he is undergoing surgery performed by the physician, who is seated at a control console 100 kilometers out of harm's way.
The patient is saved. This is the power that the amalgamation of technology and the surgical sciences is offering doctors. Just as computers revolutionized the latter half of the 20th century, the field of robotics has the potential to alter how we live in the 21st century just as profoundly. We've already seen how robots have changed the manufacturing of cars and other consumer goods by streamlining and speeding up the assembly line.
We even have robotic lawn mowers and robotic pets now, and robots have enabled us to see places that humans are not yet able to visit, such as other planets and the depths of the ocean. In the coming decades we will see robots that have artificial intelligence, coming to resemble the humans that create them. They will eventually become self-aware and conscious, and be able to do anything that a human can. When we talk about robots doing the tasks of humans, we often talk about the future, but the future of robotic surgery is already here.
Socket Programming
Definition
Sockets are interfaces that can "plug into" each other over a network. Once so "plugged in", the connected programs communicate. A "server" program is exposed via a socket connected to a certain /etc/services port number. A "client" program can then connect its own socket to the server's socket, at which point whatever the client writes to the socket is read as standard input by the server program, and the server program's standard output is read from the client's socket.
Before a user process can perform I/O operations, it calls Open to specify and obtain permissions for the file or device to be used. Once an object has been opened, the user process makes one or more calls to Read or Write data. Read reads data from the object and transfers it to the user process, while Write transfers data from the user process to the object. After all transfer operations are complete, the user process calls Close to inform the operating system that it has finished using that object.
When facilities for InterProcess Communication (IPC) and networking were added, the idea was to make the interface to IPC similar to that of file I/O. In Unix, a process has a set of I/O descriptors that one reads from and writes to. These descriptors may refer to files, devices, or communication channels (sockets). The lifetime of a descriptor is made up of three phases: creation (open socket), reading and writing (receive and send to socket), and destruction (close socket).
History
Sockets are used nearly everywhere, but they are one of the most severely misunderstood technologies around. This is a 10,000-foot overview of sockets. It is not really a tutorial - you'll still have work to do in getting things working. It doesn't cover the fine points (and there are a lot of them), but it should give you enough background to begin using sockets decently. I'm only going to talk about INET sockets, which account for at least 99% of the sockets in use, and I'll only talk about STREAM sockets - unless you really know what you're doing (in which case this HOWTO isn't for you!), you'll get better behavior and performance from a STREAM socket than anything else. I will try to clear up the mystery of what a socket is, as well as give some hints on how to work with blocking and non-blocking sockets. I'll start by talking about blocking sockets; you'll need to know how they work before dealing with non-blocking sockets.
Part of the trouble with understanding these things is that "socket" can mean a number of subtly different things, depending on context. So first, let's make a distinction between a "client" socket - an endpoint of a conversation, and a "server" socket, which is more like a switchboard operator. The client application (your browser, for example) uses "client" sockets exclusively; the web server it's talking to uses both "server" sockets and "client" sockets. Of the various forms of IPC (Inter Process Communication), sockets are by far the most popular. On any given platform, there are likely to be other forms of IPC that are faster, but for cross-platform communication, sockets are about the only game in town.
They were invented in Berkeley as part of the BSD flavor of Unix. They spread like wildfire with the Internet. With good reason -- the combination of sockets with INET makes talking to arbitrary machines around the world unbelievably easy (at least compared to other schemes).
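As a rough illustration of the create / connect / read-write / close lifecycle described above, the sketch below pairs a "server" socket with a "client" socket using Node.js's net module in TypeScript rather than the raw BSD C API the text discusses. The port number 8080 and the echo behaviour are arbitrary choices for the example.

// Minimal TCP echo pair (Node.js, TypeScript)
import * as net from "net";

// "Server" socket: listens on a port and answers each connected client.
const server = net.createServer((clientConn) => {
  clientConn.on("data", (chunk) => {
    clientConn.write(`echo: ${chunk}`);     // write back whatever the client sent
  });
});

server.listen(8080, () => {
  // "Client" socket: connects to the server's port, writes, reads, then closes.
  const client = net.connect(8080, "127.0.0.1", () => {
    client.write("hello");
  });
  client.on("data", (reply) => {
    console.log(reply.toString());          // prints "echo: hello"
    client.end();                           // destruction phase: close the socket
    server.close();
  });
});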
Intelligent Software Agents
Definition
Computers are as ubiquitous as automobiles and toasters, but exploiting their capabilities still seems to require the training of a supersonic test pilot. VCR displays blinking a constant 12 noon around the world testify to this conundrum. As interactive television, palmtop diaries and "smart" credit cards proliferate, the gap between millions of untrained users and an equal number of sophisticated microprocessors will become even more sharply apparent. With people spending a growing proportion of their lives in front of computer screens--informing and entertaining one another, exchanging correspondence, working, shopping and falling in love--some accommodation must be found between limited human attention spans and increasingly complex collections of software and data.
Computers currently respond only to what interface designers call direct manipulation. Nothing happens unless a person gives commands from a keyboard, mouse or touch screen. The computer is merely a passive entity waiting to execute specific, highly detailed instructions; it provides little help for complex tasks or for carrying out actions (such as searches for information) that may take an indefinite time.
If untrained consumers are to employ future computers and networks effectively, direct manipulation will have to give way to some form of delegation. Researchers and software companies have set high hopes on so-called software agents, which "know" users' interests and can act autonomously on their behalf. Instead of exercising complete control (and taking responsibility for every move the computer makes), people will be engaged in a cooperative process in which both human and computer agents initiate communication, monitor events and perform tasks to meet a user's goals.
The average person will have many alter egos -- in effect, digital proxies -- operating simultaneously in different places. Some of these proxies will simply make the digital world less overwhelming by hiding technical details of tasks, guiding users through complex on-line spaces or even teaching them about certain subjects. Others will actively search for information their owners may be interested in or monitor specified topics for critical changes. Yet other agents may have the authority to perform transactions (such as on-line shopping) or to represent people in their absence. As the proliferation of paper and electronic pocket diaries has already foreshadowed, software agents will have a particularly helpful role to play as personal secretaries--extended memories that remind their bearers where they have put things, whom they have talked to, what tasks they have already accomplished and which remain to be finished.
Agent programs differ from regular software mainly by what can best be described as a sense of themselves as independent entities. An ideal agent knows what its goal is and will strive to achieve it. An agent should also be robust and adaptive, capable of learning from experience and responding to unforeseen situations with a repertoire of different methods. Finally, it should be autonomous so that it can sense the current state of its environment and act independently to make progress toward its goal.
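The sketch below is a minimal sense-decide-act loop illustrating the goal-directed autonomy just described. It is a hedged abstraction, not any particular agent framework: the Environment interface, the choose function and all names are illustrative assumptions.

// A toy autonomous agent loop in TypeScript
interface Environment {
  sense(): string;                  // current state of the world, as the agent perceives it
  apply(action: string): void;      // carry out the chosen action in the world
  goalReached(): boolean;           // has the agent's goal been achieved?
}

function runAgent(env: Environment, choose: (state: string) => string): void {
  while (!env.goalReached()) {
    const state = env.sense();      // observe the environment
    const action = choose(state);   // decide autonomously, based on what was sensed
    env.apply(action);              // act to make progress toward the goal
  }
}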
Definition of Intelligent Software Agents
Intelligent Software Agents are a popular research topic these days. Because the term "agent" is currently used by many parties in many different ways, it has become difficult for users to make a good estimate of what the possibilities of agent technology are. Moreover, these agents may have a wide range of applications which can significantly affect the definition; hence it is not easy to craft a rock-solid definition that can be generalized for all of them. However, an informal definition of an intelligent software agent may be given as:
"A piece of software which performs a given task using information gleaned from its environment to act in a suitable manner so as to complete the task successfully. The software should be able to adapt itself based on changes occurring in its environment, so that a change in circumstances will still yield the intended result."
TouchScreens
Introduction
A type of display screen that has a touch-sensitive transparent panel covering the screen. Instead of using a pointing device such as a mouse or light pen, you can use your finger to point directly to objects on the screen. Although touch screens provide a natural interface for computer novices, they are unsatisfactory for most applications because the finger is such a relatively large object. It is impossible to point accurately to small areas of the screen. In addition, most users find touch screens tiring to the arms after long use.
Touch-screens are typically found on larger displays, in phones with integrated PDA features. Most are designed to work with either your finger or a special stylus. Tapping a specific point on the display will activate the virtual button or feature displayed at that location on the display. Some phones with this feature can also recognize handwriting written on the screen using a stylus, as a way to quickly input lengthy or complex information.
A touchscreen is an input device that allows users to operate a PC by simply touching the display screen. Touch input is suitable for a wide variety of computing applications. A touchscreen can be used with most PC systems as easily as other input devices such as track balls or touch pads.
History of Touch Screen Technology
A touch screen is a special type of visual display unit with a screen which is sensitive to pressure or touching. The screen can detect the position of the point of touch. The design of touch screens is best for inputting simple choices, and the choices are programmable. The device is very user-friendly since it 'talks' with the user when the user is picking up choices on the screen.
Touch technology turns a CRT, flat panel display or flat surface into a dynamic data entry device that replaces both the keyboard and mouse. In addition to eliminating these separate data entry devices, touch offers an "intuitive" interface. In public kiosks, for example, users receive no more instruction than 'touch your selection'. Specific areas of the screen are defined as "buttons" that the operator selects simply by touching them. One significant advantage of touch screen applications is that each screen can be customized to reflect only the valid options for each phase of an operation, greatly reducing the frustration of hunting for the right key or function.
Pen-based systems, such as the Palm Pilot® and signature capture systems, also use touch technology but are not included in this article. The essential difference is that the pressure levels are set higher for pen-based systems than for touch. Touch screens come in a wide range of options, from full color VGA and SVGA monitors designed for highly graphic Windows® or Macintosh® applications to small monochrome displays designed for keypad replacement and enhancement.
Specific figures on the growth of touch screen technology are hard to come by, but a 1995 study by Venture Development Corporation predicted overall growth of 17%, with at least 10% in the industrial sector. Other vendors agree that touch screen technology is becoming more popular because of its ease of use, proven reliability, expanded functionality, and decreasing cost.
A touch screen sensor is a clear glass panel with a touch-responsive surface. The touch sensor/panel is placed over a display screen so that the responsive area of the panel covers the viewable area of the video screen. There are several different touch sensor technologies on the market today, each using a different method to detect touch input. The sensor generally has an electrical current or signal going through it, and touching the screen causes a voltage or signal change. This voltage change is used to determine the location of the touch on the screen.
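As a rough illustration of that last step, the sketch below converts a pair of raw sensor readings into screen coordinates and hit-tests them against on-screen "buttons". The calibration range, resolution and button layout are made-up assumptions for the sake of the example; real touch controllers expose this through their own drivers.

# Hypothetical sketch: turn raw touch-sensor readings into a screen position
# and decide which on-screen "button" (if any) was touched.

RAW_MIN, RAW_MAX = 200, 3800        # assumed ADC range of the touch controller
SCREEN_W, SCREEN_H = 800, 600       # assumed display resolution in pixels

def raw_to_screen(raw_x, raw_y):
    """Linearly map raw sensor values to pixel coordinates."""
    scale = lambda v, size: int((v - RAW_MIN) * size / (RAW_MAX - RAW_MIN))
    return scale(raw_x, SCREEN_W), scale(raw_y, SCREEN_H)

# Each button is a named rectangle: (x, y, width, height).
BUTTONS = {
    "start":  (100, 400, 200, 100),
    "cancel": (500, 400, 200, 100),
}

def hit_test(x, y):
    """Return the name of the touched button, or None."""
    for name, (bx, by, bw, bh) in BUTTONS.items():
        if bx <= x < bx + bw and by <= y < by + bh:
            return name
    return None

# Usage: one raw reading that lands on the "start" button.
px, py = raw_to_screen(850, 3000)
print(px, py, hit_test(px, py))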
CyberTerrorism
Definition
Cyberterrorism is a new terrorist tactic that makes use of information systems or digital technology, especially the Internet, as either an instrument or a target. As the Internet becomes more of a way of life for us, it is becoming easier for its users to become targets of cyberterrorists. The number of areas in which cyberterrorists could strike is frightening, to say the least.
The difference between the conventional approaches of terrorism and new methods is primarily that it is possible to affect a large multitude of people with minimum resources on the terrorist's side, with no danger to him at all. We also glimpse into the reasons that caused terrorists to look towards the Web, and why the Internet is such an attractive alternative to them.
The growth of Information Technology has led to the development of this dangerous web of terror, for cyberterrorists could wreak maximum havoc within a small time span. Various situations that can be viewed as acts of cyberterrorism have also been covered. Banks are the most likely places to receive threats, but it cannot be said that any establishment is beyond attack. Tips by which we can protect ourselves from cyberterrorism have also been covered which can reduce problems created by the cyberterrorist.
We, as the Information Technology people of tomorrow, need to study and understand the weaknesses of existing systems and figure out ways of ensuring the world's safety from cyberterrorists. A number of issues here are ethical, in the sense that computing technology is now available to the whole world, but if this gift is used wrongly, the consequences could be disastrous. It is important that we understand and mitigate cyberterrorism for the benefit of society, and try to curtail its growth, so that we can heal the present and live the future…
WINDOWS DNA
Definition
For some time now, both small and large companies have been building robust applications for personal computers that continue to become ever more powerful and available at increasingly lower costs. While these applications are being used by millions of users each day, new forces are having a profound effect on the way software developers build applications today and on the platform on which they develop and deploy those applications.
The increased presence of Internet technologies is enabling global sharing of information, not only by small and large businesses but by individuals as well. The Internet has sparked new creativity in many, resulting in many new businesses popping up overnight, running 24 hours a day, seven days a week. Competition and the increased pace of change are placing ever-increasing demands on an application platform that lets developers build and rapidly deploy highly adaptive applications in order to gain strategic advantage.
It is possible to think of these new Internet applications needing to handle literally millions of users, a scale difficult to imagine just a few short years ago. As a result, applications need to handle user volumes of this scale, be reliable enough to operate 24 hours a day, and be flexible enough to meet changing business needs. The application platform that underlies these types of applications must also provide a coherent application model along with a set of infrastructure and prebuilt services that enable the development and management of these new applications.
Introducing Windows DNA: Framework for a New Generation of Computing Solutions
Today, the convergence of Internet and Windows computing technologies promises exciting new opportunities for savvy businesses: to create a new generation of computing solutions that dramatically improve the responsiveness of the organization, to more effectively use the Internet and the Web to reach customers directly, and to better connect people to information any time or any place. When a technology system delivers these results, it is called a Digital Nervous System. A Digital Nervous System relies on connected PCs and integrated software to make the flow of information rapid and accurate. It helps everyone act faster and make more informed decisions. It prepares companies to react to unplanned events. It allows people to focus on business, not technology.
Creating a true Digital Nervous System takes commitment, time, and imagination. It is not something every company will have the determination to do. But those who do will have a distinct advantage over those who don't. In creating a Digital Nervous System, organizations face many challenges: How can they take advantage of new Internet technologies while preserving existing investments in people, applications, and data? How can they build modern, scalable computing solutions that are dynamic and flexible to change? How can they lower the overall cost of computing while making complex computing environments work?
DNA Chips
Introduction
DNA chips, also known as micro arrays, are a very significant technological development in molecular biology and are perhaps the most efficient tool available for functional genomics today. As is evident from the name, a micro array essentially consists of an array of either oligonucleotides or cDNA fixed on a substrate. There has been an explosion of information in the field of genomics in the last five years. Genomes of several organisms have been fully sequenced. The next step necessarily involves analyzing the comparative expression levels of various genes and identifying all the possible sequence variations present in each gene, or in the noncoding regulatory regions, obtained from a particular population. Handling such large volumes of data requires techniques that necessitate miniaturization and massive parallelism. Hence the DNA chip comes into the picture.
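To make the idea of comparative expression analysis concrete, the sketch below computes toy log-ratios for a two-channel micro array, where each spot carries a fluorescence intensity for a test sample and a reference sample. The gene names and intensity values are invented for illustration; real analyses also involve background correction and normalization steps not shown here.

import math

# Hypothetical two-channel micro array data: spot -> (test intensity, reference intensity).
spots = {
    "geneA": (5200.0, 1300.0),   # up-regulated in the test sample
    "geneB": (800.0,  790.0),    # roughly unchanged
    "geneC": (150.0,  1200.0),   # down-regulated
}

def log2_ratio(test, reference):
    """Log2 ratio of test vs. reference intensity for one spot."""
    return math.log2(test / reference)

for gene, (test, ref) in spots.items():
    ratio = log2_ratio(test, ref)
    call = "up" if ratio > 1 else "down" if ratio < -1 else "unchanged"
    print(f"{gene}: log2 ratio = {ratio:+.2f} ({call})")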
Researchers such as those at the University of Alaska Fairbanks' (UAF) Institute of Arctic Biology (IAB) and the Arctic Region Supercomputing Center (ARSC) seek to understand how organisms deal with the demands of their natural environment-as shown by the discovery of many remarkable adaptations that organisms have acquired living in the extremes of Alaska. Many of these adaptations have significant biomedical relevance in areas such as stroke, cardiovascular disease, and physiological stress. Somehow, our wild counterparts have adapted to severe environmental demands over long periods of time. Simultaneous to this research, scientists are also investigating the molecular changes that can be observed in humans as a result of their environment, such as through smoking or exposure to contaminants.
This push in research has resulted in the integration with life science research of approaches from many fields, including engineering, physics, mathematics, and computer science. One of the most well-known results of this is the Human Genome Project. Through this project, researchers were able to design instruments capable of performing many different types of molecular measurements so that statistically significant and large-scale sampling of these molecules could be achieved. Now, biomedical research is producing data that show researchers that things are not always where they expected them to be, while at the same time researchers are at a rapidly expanding phase of discovery and analysis of large, highly repeatable measurements of complex molecular systems.
One of the more important and generally applicable tools that has emerged from this type of research is the DNA micro array, or DNA chip. This technology uses the fundamentals of Watson-Crick base pairing along with hybridization to customize applications of DNA micro arrays to simultaneously interrogate a large number of genetic loci (those locations on the DNA molecules that have differing biological roles). The result of this type of analysis is that experiments that once took ten years in thousands of laboratories can now be accomplished with a small number of experiments in just one laboratory.
Firewire
Definition
FireWire, originally developed by Apple Computer, Inc., is a cross-platform implementation of the high-speed serial data bus - defined by the IEEE 1394-1995 and IEEE 1394a-2000 standards [FireWire 400] and the IEEE 1394b standard [FireWire 800] - that moves large amounts of data between computers and peripheral devices. It features simplified cabling, hot swapping and transfer speeds of up to 800 megabits per second. FireWire is a high-speed serial input/output (I/O) technology for connecting peripheral devices to a computer or to each other. It is one of the fastest peripheral standards ever developed and now, at 800 megabits per second (Mbps), it is even faster.
Based on Apple-developed technology, FireWire was adopted in 1995 as an official industry standard (IEEE 1394) for cross-platform peripheral connectivity. By providing a high-bandwidth, easy-to-use I/O technology, FireWire inspired a new generation of consumer electronics devices from many companies, including Canon, Epson, HP, Iomega, JVC, LaCie, Maxtor, Mitsubishi, Matsushita (Panasonic), Pioneer, Samsung and Sony. FireWire has also been a boon to professional users because of the high-speed connectivity it has brought to audio and video production systems.
In 2001, the Academy of Television Arts & Sciences presented Apple with an Emmy award in recognition of the contributions made by FireWire to the television industry. Now FireWire 800, the next generation of FireWire technology, promises to spur the development of more innovative high-performance devices and applications. This technology brief describes the advantages of FireWire 800 and some of the applications for which it is ideally suited.
TOPOLOGY
The 1394 protocol is a peer-to-peer network with a point-to-point signaling environment. Nodes on the bus may have several ports on them. Each of these ports acts as a repeater, retransmitting any packets received by other ports within the node. Figure 1 shows what a typical consumer may have attached to their 1394 bus. Because 1394 is a peer-to-peer protocol, a specific host, such as the PC in USB, isn't required. In Figure 1, the digital camera could easily stream data to both the digital VCR and the DVD-RAM without any assistance from other devices on the bus. FireWire uses 64-bit fixed addressing, based on the IEEE 1212 standard. There are three parts to each packet of information sent by a device over FireWire:
" A 10-bit bus ID that is used to determine which FireWire bus the data came from " A 6-bit physical ID that identifies which device on the bus sent the data " A 48-bit storage area that is capable of addressing 256 terabytes of information for each node!
The bus ID and physical ID together comprise the 16-bit node ID, which allows for 64,000 nodes on a system. Individual FireWire cables can run as long as 4.5 meters. Data can be sent through up to 16 hops for a total maximum distance of 72 meters. Hops occur when devices are daisy-chained together. Look at the example below: the camcorder is connected to the external hard drive connected to Computer A, Computer A is connected to Computer B, which in turn is connected to Computer C. It takes four hops for Computer C to access the camera. The 1394 protocol supports both asynchronous and isochronous data transfers.
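The 10/6/48-bit split described above can be illustrated with a few lines of bit manipulation. The sketch below packs and unpacks a 64-bit FireWire-style address; the field widths follow the text, while the sample values are arbitrary.

# Sketch of the 64-bit FireWire addressing scheme described above:
# 10-bit bus ID | 6-bit physical ID | 48-bit offset within the node.

def pack_address(bus_id, phys_id, offset):
    assert 0 <= bus_id < (1 << 10)
    assert 0 <= phys_id < (1 << 6)
    assert 0 <= offset < (1 << 48)
    return (bus_id << 54) | (phys_id << 48) | offset

def unpack_address(addr):
    bus_id = (addr >> 54) & 0x3FF       # top 10 bits
    phys_id = (addr >> 48) & 0x3F       # next 6 bits
    offset = addr & ((1 << 48) - 1)     # low 48 bits
    return bus_id, phys_id, offset

# Usage with arbitrary sample values: bus 1, node 5, offset 0x1000.
addr = pack_address(1, 5, 0x1000)
print(hex(addr), unpack_address(addr))

# The 10-bit bus ID and 6-bit physical ID together form the 16-bit node ID.
node_id = (1 << 6) | 5
print(node_id)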
Isochronous transfers: Isochronous transfers are always broadcast in a one-to-one or one-to-many fashion. No error correction or retransmission is available for isochronous transfers. Up to 80% of the available bus bandwidth can be used for isochronous transfers.
Asynchronous transfers: Asynchronous transfers are targeted to a specific node with an explicit address. They are not guaranteed a specific amount of bandwidth on the bus, but they are guaranteed a fair shot at gaining access to the bus when asynchronous transfers are permitted. This allows error-checking and retransmission mechanisms to take place.
Biochips
Most of us won’t like the idea of implanting a biochip in our body that identifies us uniquely and can be used to track our location. That would be a major loss of privacy. But there is a flip side to this: such biochips could help agencies locate lost children, downed soldiers and wandering Alzheimer’s patients. The human body is the next big target of chipmakers. It won’t be long before biochip implants come to the rescue of the sick or those who are handicapped in some way. A large amount of money and research has already gone into this area of technology, and such implants have already been experimented with. A few US companies are selling both chips and their detectors. The chips are the size of an uncooked grain of rice, small enough to be injected under the skin using a syringe needle. They respond to a signal from the detector, held just a few feet away, by transmitting an identification number. This number is then compared with database listings of registered pets. Daniel Man, a plastic surgeon in private practice in Florida, holds the patent on a more powerful device: a chip that would enable lost humans to be tracked by satellite.
A biochip is a collection of miniaturized test sites (micro arrays) arranged on a solid substrate that permits many tests to be performed at the same time in order to achieve higher throughput and speed. Typically, a biochip’s surface area is no larger than a fingernail. Like a computer chip that can perform millions of mathematical operations in one second, a biochip can perform thousands of biological operations, such as decoding genes, in a few seconds. A genetic biochip is designed to “freeze” into place the structures of many short strands of DNA (deoxyribonucleic acid), the basic chemical instruction that determines the characteristics of an organism. Effectively, it is used as a kind of “test tube” for real chemical samples. A specially designed microscope can determine where the sample hybridized with DNA strands in the biochip. Biochips helped to dramatically increase the speed of identification of the estimated 80,000 genes in human DNA in the worldwide research collaboration known as the Human Genome Project. The microchip is described as a sort of “word search” function that can quickly sequence DNA. In addition to genetic applications, the biochip is being used in toxicological, protein, and biochemical research. Biochips can also be used to rapidly detect chemical agents used in biological warfare so that defensive measures can be taken. Motorola, Hitachi, IBM and Texas Instruments have entered the biochip business.
The biochip implant system consists of two components: a transponder and a reader or scanner. The biochip system is a radio frequency identification (RFID) system, using low-frequency radio signals to communicate between the biochip and the reader. The reading range, or activation range, between reader and biochip is small, normally between 2 and 12 inches.
The transponder
The transponder is the actual biochip implant. It is a passive transponder, meaning it contains no battery or energy source of its own. In comparison, an active transponder would provide its own energy source, normally a small battery. Because the passive transponder contains no battery and nothing to wear out, it has a very long life, up to 99 years, and requires no maintenance. Being passive, it is inactive until the reader activates it by sending it a low-power electrical charge.
The reader reads or scans the implanted biochip and receives back data (in this case an identification number) from the biochip. The communication between biochip and reader is via low-frequency radio waves; since very low frequencies are used, it is not at all harmful to the human body. The biochip transponder consists of four parts: a computer microchip, an antenna coil, a capacitor and a glass capsule.
Computer microchip
The microchip stores a unique identification number from 10 to 15 digits long. The storage capacity of current microchips is limited, capable of storing only a single ID number. AVID (American Veterinary Identification Devices) claims its chips, using a nnn-nnn-nnn format, have the capability of over 70 trillion unique numbers. The unique ID number is “etched” or encoded via a laser onto the surface of the microchip before assembly. Once the number is encoded it is impossible to alter. The microchip also contains the electronic circuitry necessary to transmit the ID number to the “reader”.
Antenna coil
This is normally a simple coil of copper wire around a ferrite or iron core. This tiny, primitive radio antenna receives and sends signals from the reader or scanner.
Tuning capacitor
The capacitor stores the small electrical charge (less than 1/1000 of a watt) sent by the reader or scanner, which activates the transponder. This “activation” allows the transponder to send back the ID number encoded in the computer chip. Because radio waves are used to communicate between the transponder and reader, the capacitor is tuned to the same frequency as the reader.
Glass capsule
The glass capsule houses the microchip, antenna coil and capacitor. It is a small capsule, the smallest measuring 11 mm in length and 2 mm in diameter, about the size of an uncooked grain of rice. The capsule is made of biocompatible material such as soda lime glass. After assembly, the capsule is hermetically (air-tight) sealed, so no bodily fluids can touch the electronics inside. Because the glass is very smooth and susceptible to movement, a material such as a polypropylene polymer sheath is attached to one end of the capsule. This sheath provides a compatible surface to which the bodily tissue fibers bond or interconnect, resulting in permanent placement of the biochip. The biochip is inserted into the subject with a hypodermic syringe. Injection is safe and simple, comparable to common vaccines. Anesthesia is not required nor recommended. In dogs and cats, the biochip is usually injected behind the neck between the shoulder blades.
The reader
The reader consists of an “exciter coil” which creates an electromagnetic field that, via radio signals, provides the necessary energy (less than 1/1000 of a watt) to “excite” or “activate” the implanted biochip. The reader also carries a receiving coil that receives the transmitted code or ID number sent back from the “activated” implanted biochip. This all takes place very fast, in milliseconds. The reader also contains the software and components to decode the received code and display the result on an LCD display. The reader can include an RS-232 port to attach a computer.
How it works
The reader generates a low-power electromagnetic field, in this case via radio signals, which “activates” the implanted biochip. This “activation” enables the biochip to send its ID code back to the reader via radio signals.
The reader amplifies the received code, converts it to digital format, decodes it and displays the ID number on the reader’s LCD display. The reader must normally be between 2 and 12 inches from the biochip to communicate. The reader and biochip can communicate through most materials, except metal.
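A simplified way to picture the reader’s job is “receive an ID, decode it, look it up”. The sketch below simulates that lookup against a small registry; the ID values and registry entries are invented for illustration and do not correspond to any real implant database.

# Hypothetical sketch of the reader-side lookup: an activated biochip returns
# an ID number, which the reader decodes and matches against a registry.

REGISTRY = {
    "123-456-789": {"species": "dog", "name": "Rex", "owner": "A. Smith"},
    "987-654-321": {"species": "cat", "name": "Mia", "owner": "B. Jones"},
}

def decode_id(raw_digits):
    """Format a raw 9-digit string into the nnn-nnn-nnn style mentioned above."""
    assert len(raw_digits) == 9 and raw_digits.isdigit()
    return f"{raw_digits[0:3]}-{raw_digits[3:6]}-{raw_digits[6:9]}"

def look_up(raw_digits):
    chip_id = decode_id(raw_digits)
    record = REGISTRY.get(chip_id)
    if record is None:
        return f"{chip_id}: not registered"
    return f"{chip_id}: {record['species']} '{record['name']}', owner {record['owner']}"

# Usage: a reading that matches a registered pet, and one that does not.
print(look_up("123456789"))
print(look_up("555000111"))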
Face Recognition Technology
Definition
Humans are very good at recognizing faces and complex patterns. Even the passage of time doesn't affect this capability, and it would therefore help if computers could become as robust as humans at face recognition. Machine recognition of human faces from still or video images has attracted a great deal of attention in the psychology, image processing, pattern recognition, neural science, computer security, and computer vision communities. Face recognition is probably one of the most non-intrusive and user-friendly biometric authentication methods currently available; a screensaver equipped with face recognition technology can automatically unlock the screen whenever the authorized user approaches the computer.
The face is an important part of who we are and how people identify us. It is arguably a person's most unique physical characteristic. While humans have had the innate ability to recognize and distinguish different faces for millions of years, computers are just now catching up. Visionics, a company based in New Jersey, is one of many developers of facial recognition technology. The twist to its particular software, FaceIt, is that it can pick someone's face out of a crowd, extract that face from the rest of the scene and compare it to a database full of stored images. In order for this software to work, it has to know what a basic face looks like.
Facial recognition software is designed to pinpoint a face and measure its features. Each face has certain distinguishable landmarks, which make up the different facial features. These landmarks are referred to as nodal points. There are about 80 nodal points on a human face. Here are a few of the nodal points that are measured by the software:
" Distance between eyes
" Width of nose
" Depth of eye sockets
" Cheekbones
" Jaw line
" Chin
These nodal points are measured to create a numerical code, a string of numbers that represents the face in a database. This code is called a faceprint. Only 14 to 22 nodal points are needed for the FaceIt software to complete the recognition process.
Software
Facial recognition software falls into a larger group of technologies known as biometrics. Biometrics uses biological information to verify identity. The basic idea behind biometrics is that our bodies contain unique properties that can be used to distinguish us from others. Besides facial recognition, biometric authentication methods also include:
" Fingerprint scan
" Retina scan
" Voice identification
Facial recognition methods generally involve a series of steps that serve to capture, analyze and compare a face to a database of stored images. The basic processes used by the FaceIt system to capture and compare images are:
1. Detection - When the system is attached to a video surveillance system, the recognition software searches the field of view of a video camera for faces. If there is a face in the view, it is detected within a fraction of a second. A multi-scale algorithm is used to search for faces in low resolution. The system switches to a high-resolution search only after a head-like shape is detected.
2. Alignment - Once a face is detected, the system determines the head's position, size and pose. A face needs to be turned at least 35 degrees toward the camera for the system to register it.
3. Normalization -The image of the head is scaled and rotated so that it can be registered and mapped into an appropriate size and pose. Normalization is performed regardless of the head's location and distance from the camera. Light does not impact the normalization process.
4. Representation - The system translates the facial data into a unique code. This coding process allows for easier comparison of the newly acquired facial data to stored facial data.
5. Matching - The newly acquired facial data is compared to the stored data and (ideally) linked to at least one stored facial representation.
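The representation and matching steps can be pictured as building a numeric feature vector (a toy "faceprint") from nodal-point measurements and then finding the closest stored vector. The sketch below does this with a plain Euclidean distance over made-up measurements; it only illustrates the idea and is not how FaceIt actually encodes faces.

import math

# Toy "faceprint": a vector of nodal-point measurements, here just
# (eye distance, nose width, eye-socket depth, jaw width) in arbitrary units.
DATABASE = {
    "alice": (62.0, 34.0, 21.0, 118.0),
    "bob":   (58.0, 38.0, 19.0, 126.0),
    "carol": (65.0, 31.0, 23.0, 110.0),
}

def distance(a, b):
    """Euclidean distance between two faceprints."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(probe, threshold=6.0):
    """Return the closest stored identity, or None if nothing is close enough."""
    name, best = min(DATABASE.items(), key=lambda item: distance(probe, item[1]))
    return name if distance(probe, best) <= threshold else None

# Usage: a newly acquired faceprint slightly different from Alice's stored one.
print(match((61.0, 35.0, 20.5, 117.0)))   # expected to match "alice"
print(match((80.0, 50.0, 30.0, 150.0)))   # expected to return None

The threshold models the trade-off the real system also faces: set it too low and genuine users are rejected, too high and different faces are confused with one another.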