Thanks

Thanks for visiting this site. This site provides the latest seminar topics for any engineering stream, whether it is Computer Science, Electronics, Information Technology or Mechanical.

Tuesday, February 24, 2009

Satellite Radio

Definition
We all have our favorite radio stations that we preset into our car radios, flipping between them as we drive to and from work, on errands and around town. But when you travel too far away from the source station, the signal breaks up and fades into static. Most radio signals can only travel about 30 or 40 miles from their source. On long trips that find you passing through different cities, you might have to change radio stations every hour or so as the signals fade in and out.
Now, imagine a radio station that can broadcast its signal from more than 22,000 miles (35,000 km) away and then come through on your car radio with complete clarity, without your ever having to change the station. Satellite Radio, or Digital Audio Radio Service (DARS), is a subscriber-based radio service that is broadcast directly from satellites. Subscribers are able to receive up to 100 radio channels featuring CD-quality music, news, weather, sports, talk radio and other entertainment channels. Satellite radio is an idea nearly 10 years in the making. In 1992, the U.S. Federal Communications Commission (FCC) allocated a spectrum in the "S" band (2.3 GHz) for nationwide broadcasting of satellite-based Digital Audio Radio Service (DARS). In 1997, the FCC awarded 8-year radio broadcast licenses to two companies, Sirius Satellite Radio (formerly CD Radio) and XM Satellite Radio (formerly American Mobile Radio). Both companies worked aggressively to be ready to offer their radio services to the public by the end of 2000. Automotive radios were expected to be the largest application of satellite radio.
The satellite era began in September 2001 when XM launched in selected markets, followed by full nationwide service in November. Sirius lagged slightly, with a gradual rollout beginning in February, including a quiet launch in the Bay Area on June 15. The nationwide launch came on July 1.

DSP Processor

Definition
The best way to understand the requirements is to examine typical DSP algorithms and identify how their computational requirements have influenced the architectures of DSP processors. Let us consider one of the most common processing tasks: the finite impulse response (FIR) filter.
For each tap of the filter, a data sample is multiplied by a filter coefficient and the result is added to a running sum over all of the taps. Hence the main component of the FIR filter is the dot product: multiply and add. These operations are not unique to the FIR filter algorithm; in fact, multiplication is one of the most common operations performed in signal processing. Convolution, IIR filtering and the Fourier transform also involve heavy use of the multiply-accumulate operation. Originally, microprocessors implemented multiplication as a series of shift and add operations, each of which consumes one or more clock cycles. A DSP processor therefore first requires hardware that can multiply in a single cycle. Most DSP algorithms require a multiply-accumulate (MAC) unit.
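To make the multiply-accumulate structure of the FIR filter concrete, here is a minimal Python sketch of the tap loop described above; the function name fir_filter and the test signal are invented for illustration and are not taken from any particular DSP library.

def fir_filter(samples, coefficients):
    """Direct-form FIR filter: each output is a dot product of the
    most recent input samples with the filter coefficients."""
    taps = len(coefficients)
    output = []
    # Prepend zeros so the first outputs have a full history to multiply against.
    padded = [0.0] * (taps - 1) + list(samples)
    for n in range(len(samples)):
        acc = 0.0
        # One multiply-accumulate (MAC) per tap, exactly the operation a
        # DSP processor is built to execute in a single cycle.
        for k in range(taps):
            acc += coefficients[k] * padded[n + taps - 1 - k]
        output.append(acc)
    return output

# Example: a 4-tap moving-average filter applied to a short test signal.
print(fir_filter([1, 2, 3, 4, 5], [0.25, 0.25, 0.25, 0.25]))

On a general-purpose CPU each iteration of the inner loop costs several instructions; the point of the DSP architecture described next is to collapse one such iteration into a single cycle.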
In comparison to other types of computing tasks, DSP applications typically have very high computational requirements, since they often must execute DSP algorithms in real time on lengthy data segments. Parallel operation of several independent execution units is therefore a must; for example, in addition to the MAC unit, an ALU and a shifter are also required. Executing a MAC in every clock cycle requires more than just a single-cycle MAC unit. It also requires the ability to fetch the MAC instruction, a data sample, and a filter coefficient from memory in a single cycle. Hence good DSP performance requires high memory bandwidth, higher than that of general-purpose microprocessors, which had a single bus connection to memory and could make only one access per cycle. The most common approach is to use two or more separate banks of memory, each accessed by its own bus and capable of being read or written in a single cycle. This means programs are stored in one memory and data in another. With this arrangement, the processor can fetch an instruction and a data operand in parallel in every cycle. Since many DSP algorithms consume two data operands per instruction, a further optimization commonly used is to include a small bank of RAM near the processor core that is used as an instruction cache. When a small group of instructions is executed repeatedly, the cache is loaded with those instructions, freeing the instruction bus to be used for data fetches instead of instruction fetches and thus enabling the processor to execute a MAC in a single cycle. High memory bandwidth requirements are often further supported by dedicated hardware for calculating memory addresses. These address generation units operate in parallel with the DSP processor's main execution units, enabling it to access data at a new location in memory without pausing to calculate the new address.
Memory accesses in DSP algorithms tend to exhibit very predictable patterns: for example, for each sample in an FIR filter, the filter coefficients are accessed sequentially from start to finish, and the access then starts over from the beginning of the coefficient vector when processing the next input sample. This is in contrast to other computing tasks, such as database processing, where accesses to memory are far less predictable. DSP processor address generation units take advantage of this predictability by supporting specialized addressing modes that enable the processor to efficiently access data in the patterns commonly found in DSP algorithms. The most common of these modes is register-indirect addressing with post-increment, which is used to automatically increment the address pointer in algorithms where repetitive computations are performed on a series of data stored sequentially in memory. Without this feature, the programmer would need to spend instructions explicitly incrementing the address pointer.
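As an illustration of register-indirect addressing with post-increment, the following sketch models an address generation unit in Python; the AddressGenerator class, its register names and the buffer addresses are invented for the example and do not correspond to any specific DSP instruction set.

class AddressGenerator:
    """Toy model of a DSP address generation unit with post-increment
    and modulo (circular) addressing over a coefficient buffer."""

    def __init__(self, base, length):
        self.base = base          # start of the buffer in 'memory'
        self.length = length      # buffer length used for circular wrap-around
        self.pointer = base

    def next_address(self):
        """Return the current address, then post-increment the pointer,
        wrapping around at the end of the buffer."""
        addr = self.pointer
        self.pointer = self.base + (self.pointer - self.base + 1) % self.length
        return addr

# Walking through an 8-word coefficient buffer twice, with no explicit
# pointer-update instructions needed in the 'program' itself.
agu = AddressGenerator(base=0x100, length=8)
print([hex(agu.next_address()) for _ in range(16)])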

Jini Technology

Definition
Jini was part of the original vision for Java, but it was put on the back burner while Sun waited for Java to gain widespread acceptance. As the Jini project revved up and more than 30 technology partners signed on, it became impossible to keep it under wraps. So Sun cofounder Bill Joy, who helped dream up Jini, leaked the news to the media earlier this month. It was promptly smothered in accolades and hyperbolic prose.
When you plug a new Jini-enabled device into a network, it broadcasts a message to any lookup service on the network saying, in effect, "Here I am. Is anyone else out there?" The lookup service registers the new machine, keeps a record of its attributes and sends a message back to the Jini device, letting it know where to reach the lookup service if it needs help. So when it comes time to print, for example, the device calls the lookup service, finds what it needs and sends the job to the appropriate machine. Jini actually consists of a very small piece of Java code that runs on your computer or device.
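The register-and-lookup flow described above can be sketched as a small conceptual model; this is a Python illustration of the discovery pattern only, not Jini's actual Java API, and the LookupService class, its methods and the device names are all invented for the example.

class LookupService:
    """Conceptual model of a network lookup service: devices register
    their attributes, and clients query by capability."""

    def __init__(self):
        self.registry = {}

    def register(self, name, attributes):
        # "Here I am" - the device announces itself and its capabilities.
        self.registry[name] = attributes

    def find(self, capability):
        # A client asks for any device offering a given capability.
        return [name for name, attrs in self.registry.items()
                if capability in attrs]

lookup = LookupService()
lookup.register("office-printer", {"print", "duplex"})
lookup.register("laptop", {"display", "compute"})

# When it comes time to print, a device queries the lookup service.
print(lookup.find("print"))   # ['office-printer']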
Jini lets you dynamically move code, and not just data, from one machine to another. That means you can send a Java program to any other Jini machine and run it there, harnessing the power of any machine on your network to complete a task or run a program. So far, Jini seems to offer little more than basic network services. Don't expect it to turn your household devices into supercomputers; it will take some ingenious engineering before your stereo will start dating your laptop. Jini can run on small handheld devices with little or no processing power, but these devices need to be network-enabled and need to be controlled, by proxy, by another Jini-enabled piece of hardware or software. The first customer shipment is slated for the fall. Jini-enabled software could ship by the end of the year, and the first Jini-enabled devices could be in stores by next year.
Security. Jini will use the same security and authentication measures as Java. Unfortunately, Java's security model has not been introduced yet.
Microsoft. Without Jini, Java is just a language that can run on any platform. With it, Java becomes a networked system with many of the same capabilities as a network operating system, like Windows NT. Don't expect Microsoft to support Jini.
Competition. Lucent's Inferno, a lightweight OS for connecting devices; Microsoft's Millennium, a Windows distributed computing model; and Hewlett-Packard's JetSend, a protocol that lets peripheral devices talk. Sun Microsystems has a dream: the future of computing will not center around the personal computer, but around the network itself. Any network will do -- your office Ethernet grid, your home-office local area network, the Internet; it doesn't matter.
Sun has carried this banner for years, and essentially cemented its network-centric computing model with the invention of the Java programming language. This week in San Francisco, Sun -- with 37 big-name partners -- unveiled Jini, its latest and most ambitious initiative yet. A programming platform and connection technology, Jini is designed to allow painless, immediate networking of any and all compliant electronic devices, be they personal digital assistants, cell phones, dishwashers, printers, and so on. Partnering companies include hardware and software vendors, and marquee consumer electronics players like Sony.

Dual Core Processor

Definition
Seeing the technical difficulties in cranking higher clock speeds out of present single-core processors, the dual-core architecture has started to establish itself as the answer to the development of future processors. With the release of the AMD dual-core Opteron and the Intel Pentium Extreme Edition 840, April 2005 officially marked the beginning of the dual-core endeavors of both companies.
The transition from a single core to a dual core architecture was triggered by a couple of factors. According to Moore's Law, the number of transistors (complexity) on a microprocessor doubles approximately every 18 months. The latest 2 MB Prescott core possesses more than 160 million transistors; breaking the 200 million mark is just a matter of time. Transistor count is one of the reasons driving the industry toward the dual core architecture. Instead of using the available astronomically high transistor counts to design a new, more complex single core processor that would offer higher performance than the present offerings, chip makers have decided to put these transistors to use in producing two identical yet independent cores and combining them into a single package.
To them, this is actually a far better use of the available transistors, and in return should give consumers more value for their money. Besides, with the single core's thermal envelope being pushed to its limit and the severe current leakage issues that have hit the silicon manufacturing industry ever since the transition to 90 nm chip fabrication, it is extremely difficult for chip makers (particularly Intel) to squeeze more clock speed out of the present single core design. Pushing for higher clock speeds is not a feasible option at present because of transistor current leakage, and adding more features into the core would increase the complexity of the design and make it harder to manage. These are the factors that have made the dual core option the more viable alternative for making full use of the available transistors.
What is a dual core processor?
A dual core processor is a CPU with two separate cores on the same die, each with its own cache. It's the equivalent of getting two microprocessors in one. In a single-core or traditional processor, the CPU is fed strings of instructions it must order, execute, then selectively store in its cache for quick retrieval. When data outside the cache is required, it is retrieved through the system bus from random access memory (RAM) or from storage devices. Accessing these slows performance down to the maximum speed the bus, RAM or storage device will allow, which is far slower than the speed of the CPU. The situation is compounded when multi-tasking. In this case the processor must switch back and forth between two or more sets of data streams and programs. CPU resources are depleted and performance suffers.
In a dual core processor each core handles incoming data strings simultaneously to improve efficiency. Just as two heads are better than one, so are two hands. Now, while one core is executing, the other can be accessing the system bus or executing its own code. Adding to this favorable scenario, both AMD's and Intel's dual-core flagships are 64-bit. To utilize a dual core processor, the operating system must be able to recognize multi-threading and the software must have simultaneous multi-threading technology (SMT) written into its code. SMT enables parallel multi-threading wherein the cores are served multi-threaded instructions in parallel. Without SMT the software will only recognize one core. Adobe Photoshop is an example of SMT-aware software. SMT is also used with multi-processor systems common to servers.
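As a rough illustration of why software must be written with multiple threads to benefit from a second core, the following Python sketch splits a workload across two worker threads; the function process_chunk and the workload are made up for the example, and note that in CPython the global interpreter lock limits true parallelism for pure-Python code, so this only shows the structure of the idea.

import threading

def process_chunk(data, results, index):
    """Worker: each thread handles its own slice of the incoming data,
    analogous to each core executing its own instruction stream."""
    results[index] = sum(x * x for x in data)

data = list(range(1_000_000))
mid = len(data) // 2
results = [0, 0]

# Two threads, conceptually one per core.
t1 = threading.Thread(target=process_chunk, args=(data[:mid], results, 0))
t2 = threading.Thread(target=process_chunk, args=(data[mid:], results, 1))
t1.start(); t2.start()
t1.join(); t2.join()

print(sum(results))

Without such explicit threading, the program presents a single instruction stream and the second core sits idle, which is the point made above about SMT-aware software.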
An attractive value of dual core processors is that they do not require a new motherboard, but can be used in existing boards that feature the correct socket. For the average user the difference in performance will be most noticeable in multi-tasking until more software is SMT aware. Servers running multiple dual core processors will see an appreciable increase in performance.

Sensors on 3D Digitization

Definition
Machine vision involves the analysis of the properties of the luminous flux reflected or radiated by objects. To recover the geometrical structure of these objects, either to recognize them or to measure their dimensions, two basic vision strategies are available [1].
Passive vision attempts to analyze the structure of the scene under ambient light [1]. Stereoscopic vision is a passive optical technique. The basic idea is that two or more digital images are taken from known locations. The images are then processed to find the correlations between them. As soon as matching points are identified, the geometry can be computed.
Active vision attempts to reduce the ambiguity of scene analysis by structuring the way in which images are formed. Sensors that capitalize on active vision can resolve most of the ambiguities found with two-dimensional imaging systems. Lidar-based or triangulation-based laser range cameras are examples of active vision techniques. One digital 3D imaging system based on optical triangulation was developed and demonstrated.
AUTOSYNCHRONIZED SCANNER
The auto-synchronized scanner, depicted schematically in Figure 1, can provide registered range and colour data of visible surfaces. A 3D surface map is captured by scanning a laser spot onto a scene, collecting the reflected laser light, and finally focusing the beam onto a linear laser spot sensor. Geometric and photometric corrections of the raw data give two images in perfect registration: one with x, y, z co-ordinates and a second with reflectance data. A laser beam composed of multiple visible wavelengths is used to measure the colour map of the scene.
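A simplified version of the optical triangulation geometry behind such a scanner can be sketched as follows; this is an idealized textbook setup, not the auto-synchronized scanner's actual optics, and the baseline value and angles in the example are arbitrary assumptions.

import math

def triangulate_depth(baseline, laser_angle, sensor_angle):
    """Idealized point triangulation: the laser source and the detector sit
    a known baseline apart, and each sees the illuminated spot at a known
    angle from the baseline. Returns the perpendicular distance of the spot
    from the baseline."""
    ta = math.tan(laser_angle)
    tb = math.tan(sensor_angle)
    return baseline * ta * tb / (ta + tb)

# Example: 0.5 m baseline, both rays at 45 degrees, so the depth is half
# the baseline.
print(triangulate_depth(0.5, math.radians(45), math.radians(45)))  # 0.25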

MIMO Wireless Channels: Capacity and Performance Prediction

Multiple-input multiple-output (MIMO) communication techniques make use of multi-element antenna arrays at both the TX and the RX side of a radio link and have been shown theoretically to drastically improve capacity over more traditional single-input multiple-output (SIMO) systems [2, 3, 5, 7]. SIMO channels in wireless networks can provide diversity gain, array gain, and interference cancellation gain, among other benefits. In addition to these same advantages, MIMO links can offer a multiplexing gain by opening Nmin parallel spatial channels, where Nmin is the minimum of the number of TX and RX antennas. Under certain propagation conditions capacity gains proportional to Nmin can be achieved [8]. Space-time coding [14] and spatial multiplexing [1, 2, 7, 16] (a.k.a. BLAST) are popular signal processing techniques making use of MIMO channels to improve the performance of wireless networks. Previous work and open problems. The literature on realistic MIMO channel models is still scarce. For the line-of-sight (LOS) case, some previous work exists. In the fading case, previous studies have mostly been confined to i.i.d. Gaussian matrices, an idealistic assumption in which the entries of the channel matrix are independent complex Gaussian random variables [2, 6, 8]. The influence of spatial fading correlation on either the TX or the RX side of a wireless MIMO radio link has been addressed in [3, 15]. In practice, however, the realization of high MIMO capacity is sensitive not only to the fading correlation between individual antennas but also to the rank behavior of the channel. In the existing literature, high rank behavior has been loosely linked to the existence of a dense scattering environment. Recent successful demonstrations of MIMO technologies have been in indoor-to-indoor channels, where rich scattering is almost always guaranteed.
Definition:
MIMO is a technique for boosting wireless bandwidth and range by taking advantage of multiplexing. MIMO algorithms in a radio chipset send information out over two or more antennas. The radio signals reflect off objects, creating multiple paths that in conventional radios cause interference and fading. But MIMO uses these paths to carry more information, which is recombined on the receiving side by the MIMO algorithms. A conventional radio uses one antenna to transmit a data stream. A typical smart antenna radio, on the other hand, uses multiple antennas. This design helps combat distortion and interference. Examples of multiple-antenna techniques include switched antenna diversity selection, radio-frequency beamforming, digital beamforming and adaptive diversity combining. These smart antenna techniques are one-dimensional, whereas MIMO is multi-dimensional. It builds on one-dimensional smart antenna technology by simultaneously transmitting multiple data streams through the same channel, which increases wireless capacity.
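To make the capacity gain proportional to Nmin concrete, the standard ergodic-capacity estimate for an i.i.d. Gaussian channel matrix, C = log2 det(I + (SNR/Nt) H H^H), can be evaluated numerically as in the sketch below. This is the textbook formula for the idealized fading case mentioned earlier, not a result specific to the work excerpted here, and the antenna counts, SNR and trial count are arbitrary assumptions.

import numpy as np

def mimo_capacity(n_tx, n_rx, snr_linear, trials=2000,
                  rng=np.random.default_rng(0)):
    """Average capacity (bits/s/Hz) of an i.i.d. Rayleigh-fading MIMO channel,
    C = log2 det(I + (SNR / n_tx) * H @ H^H), averaged over random H."""
    total = 0.0
    for _ in range(trials):
        # Entries of H are independent complex Gaussians with unit variance.
        h = (rng.standard_normal((n_rx, n_tx)) +
             1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
        gram = np.eye(n_rx) + (snr_linear / n_tx) * h @ h.conj().T
        total += np.log2(np.linalg.det(gram).real)
    return total / trials

snr = 10 ** (10 / 10)  # 10 dB
for n in (1, 2, 4):
    print(n, "x", n, "antennas:", round(mimo_capacity(n, n, snr), 2), "bits/s/Hz")

Under these idealized conditions the printed capacities grow roughly linearly with the number of antennas, which is the multiplexing gain referred to above.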

Unlicensed Mobile Access

Definition
During the past year, mobile and integrated fixed/mobile operators announced an increasing number of fixed-mobile convergence initiatives, many of which are materializing in 2006. The majority of these initiatives are focused around UMA, the first standardized technology enabling seamless handover between mobile radio networks and WLANs. Clearly, in one way or another, UMA is a key agenda item for many operators. Operators are looking at UMA to address the indoor voice market (i.e. accelerate or control fixed-to-mobile substitution) as well as to enhance the performance of mobile services indoors. Furthermore, these operators are looking at UMA as a means to fend off the growing threat from new Voice-over-IP (VoIP) operators.
However, when evaluating a new 3GPP standard like UMA, many operators ask themselves how well it fits with other network evolution initiatives, including:
o UMTS
o Soft MSCs
o IMS Data Services
o I-WLAN
o IMS Telephony
This whitepaper aims to clarify the position of UMA in relation to these other strategic initiatives. For a more comprehensive introduction to the UMA opportunity, refer to "The UMA Opportunity," available on the Kineto web site (www.kineto.com).
Mobile Network Reference Model
To best understand the role UMA plays in mobile network evolution, it is helpful to first introduce a reference model for today's mobile networks. Figure 1 provides a simplified model for the majority of 3GPP-based mobile networks currently in deployment. Based on Release 99, they typically consist of the following:
o GSM/GPRS/EDGE Radio Access Network (GERAN): In mature mobile markets, the GERAN typically provides good cellular coverage throughout an operator's service territory and is optimized for the delivery of high-quality circuit-based voice services. While capable of delivering mobile data (packet) services, GERAN data throughput is typically under 80 Kbps and network usage cost is high.
o Circuit Core/Services: The core circuit network provides the services responsible for the vast majority of mobile revenues today. The circuit core consists of legacy Serving and Gateway Mobile Switching Centers (MSCs) providing mainstream mobile telephony services as well as a number of systems supporting the delivery of other circuit-based services including SMS, voice mail and ring tones.
o Packet Core/Services: The core packet network is responsible for providing mobile data services. The packet core consists of GPRS infrastructure (SGSNs and GGSNs) as well as a number of systems supporting the delivery of packet-based services including WAP and MMS.
Introducing UMA into Mobile Networks
For mobile and integrated operators, adding UMA to existing networks is not a major undertaking. UMA essentially defines a new radio access network (RAN), the UMA access network. Like GSM/GPRS/EDGE (GERAN) and UMTS (UTRAN) RANs, a UMA access network (UMAN) leverages well-defined, standard interfaces into an operator's existing circuit and packet core networks for service delivery. However, unlike GSM or UMTS RANs, which utilize expensive private backhaul circuits as well as costly base stations and licensed spectrum for wireless coverage, a UMAN enables operators to leverage their subscribers' existing broadband access connections for backhaul as well as inexpensive WLAN access points and unlicensed spectrum for wireless coverage.

Amorphous Computing and Swarm

Introduction
Amorphous computing consists of a multitude of interacting computers with modest computing power and memory, and modules for intercommunication. These collections of devices are known as swarms. The desired coherent global behaviour of the computer is achieved from the local interactions between the individual agents. The global behaviour of these vast numbers of unreliable agents is resilient to a small fraction of misbehaving agents and to noisy environments. This makes them highly useful for sensor networks, MEMS, internet nodes, etc. Presently, of the 8 billion computational units existing worldwide, only 2% are stand-alone computers. This proportion is projected to decrease further with the paradigm shift to the biologically inspired amorphous computing model. An insight into amorphous and swarm computing is given in this paper.
The ideas for amorphous computing have been derived from the swarm behaviour of social organisms like ants, bees and bacteria. Recently, biologists and computer scientists studying artificial life have modelled biological swarms to understand how such social animals interact, achieve goals and evolve. A certain level of intelligence, exceeding that of the individual agents, results from the swarm behaviour. Amorphous computing is established with a collection of computing particles, each with modest memory and computing power, spread out over a geographical space and running identical programs. Swarm intelligence may be derived from the randomness, repulsion and unpredictability of the agents, thereby resulting in diverse solutions to the problem. There are no known criteria to evaluate swarm intelligence performance.
Inspiration
The development of swarm computing has been inspired by several natural phenomena. The most complex of activities, like optimal path finding, are executed by simple organisms. Lately, MEMS research has paved the way for manufacturing swarm agents at low cost and with high efficiency.
The biological world
In ant colonies, the worker ants have decentralised control and a robust mechanism for some complex activities like foraging, finding the shortest path to a food source and back home, building and protecting nests, and finding the richest food source in the locality. The ants communicate by using pheromones. Trails of pheromone are laid down by a given ant and can be followed by other ants. Depending on the species, ants lay trails travelling from the nest, to the nest, or possibly in both directions. Pheromones evaporate over time. Pheromones also accumulate when multiple ants use the same path. As the ants forage, the optimal path to food is likely to have the highest deposition of pheromones, since more ants follow this path and deposit pheromones. The longer paths are less likely to be travelled and therefore have only a smaller concentration of pheromones. With time, most of the ants follow the optimal path. When the food sources deplete, the pheromones evaporate and new trails can be discovered. This optimal path finding approach is highly dynamic and robust.
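A minimal simulation of this pheromone mechanism, with two alternative paths of different length, can be sketched in Python; the update rules, constants and the two-path setup are simplified assumptions for illustration rather than a model of any real ant species.

import random

# Two paths from nest to food: the short path takes 1 step, the long one 2.
path_length = {"short": 1, "long": 2}
pheromone = {"short": 1.0, "long": 1.0}
EVAPORATION = 0.1   # fraction of pheromone lost per round
DEPOSIT = 1.0       # pheromone laid per completed trip

for round_ in range(50):
    for ant in range(20):
        total = pheromone["short"] + pheromone["long"]
        choice = "short" if random.random() < pheromone["short"] / total else "long"
        # Shorter paths are completed more often per unit time, so they
        # receive more deposit: scale the deposit by the inverse path length.
        pheromone[choice] += DEPOSIT / path_length[choice]
    for path in pheromone:
        pheromone[path] *= (1 - EVAPORATION)

print(pheromone)  # pheromone concentrates on the short path over time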
Similar organization and behaviour are also present in flocks of birds. For a bird to participate in a flock, it only adjusts its movements to coordinate with the movements of its flock mates, typically the neighbours that are close to it in the flock. A bird in a flock simply tries to stay close to its neighbours but avoid collisions with them. A bird does not take commands from any leader bird, since there is no lead bird. Any bird can fly in the front, center or back of the swarm. Swarm behaviour helps birds take advantage of several things, including protection from predators (especially for birds in the middle of the flock) and searching for food (essentially each bird is exploiting the eyes of every other bird). Even complex biological entities like the brain are a swarm of interacting simple agents like neurons. Each neuron does not have the holistic picture, but processes simple elements through its interaction with a few other neurons and so paves the way for the thinking process.
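The two rules mentioned for each bird, staying close to its neighbours while avoiding collisions, can be written down as a small update step; this is a generic boids-style sketch with invented weights (cohesion, separation, radius, damping), not a model taken from the text.

import numpy as np

def flock_step(positions, velocities, cohesion=0.01, separation=0.05,
               radius=1.0, damping=0.9):
    """One update of a minimal flock: each bird steers toward the average
    position of the other birds (cohesion) and away from birds that are
    too close (separation). There is no leader bird."""
    new_v = velocities.copy()
    for i, p in enumerate(positions):
        offsets = positions - p
        dists = np.linalg.norm(offsets, axis=1)
        others = dists > 0
        # Cohesion: accelerate toward the centre of the rest of the flock.
        new_v[i] += cohesion * offsets[others].mean(axis=0)
        # Separation: push away from any bird closer than `radius`.
        close = others & (dists < radius)
        if close.any():
            new_v[i] -= separation * offsets[close].mean(axis=0)
    new_v *= damping                      # keep speeds bounded
    return positions + new_v, new_v

rng = np.random.default_rng(1)
pos, vel = rng.uniform(0, 10, size=(30, 2)), np.zeros((30, 2))
for _ in range(100):
    pos, vel = flock_step(pos, vel)
print(pos.std(axis=0))   # the spread shrinks as the flock coheres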

AJAX-A new Approach To Web Application

Introduction
Web application design has evolved in a number of ways since the time of its birth. To make web pages more interactive, various techniques have been devised both at the browser level and at the server level. The introduction of the XMLHttpRequest class in Internet Explorer 5 by Microsoft paved the way for interacting with the server using JavaScript, asynchronously. AJAX, shorthand for Asynchronous JavaScript And XML, is a technique which uses this XMLHttpRequest object of the browser, plus the Document Object Model and DHTML, and provides for making highly interactive web applications in which the entire web page need not be reloaded by a user action; only parts of the page are loaded dynamically by exchanging information with the server. This approach has been able to enhance the interactivity and speed of web applications to a great extent. Interactive applications such as Google Maps, Orkut and instant messengers make extensive use of this technique. This report presents an overview of the basic concepts of AJAX and how it is used in making web applications.

Creating web applications has been considered one of the most exciting jobs in current interaction design. But web interaction designers can't help feeling a little envious of their colleagues who create desktop software. Desktop applications have a richness and responsiveness that has seemed out of reach on the Web. The same simplicity that enabled the Web's rapid proliferation also creates a gap between the experiences that can be provided through web applications and the experiences users can get from a desktop application.

In the earliest days of the Web, designers chafed against the constraints of the medium. The entire interaction model of the Web was rooted in its heritage as a hypertext system: click the link, request the document, wait for the server to respond. Designers could not think of changing the basic foundation of the web, the call-response model, to improve web applications, because of the various caveats, restrictions and compatibility issues associated with it. But the urge to enhance the responsiveness of web applications made designers take up the task of making the Web work the best it could within the hypertext interaction model, developing new conventions for Web interaction that allowed their applications to reach audiences who never would have attempted to use desktop applications designed for the same tasks. The designers came up with a technique called AJAX, shorthand for Asynchronous JavaScript And XML, which is a web development technique for creating interactive web applications. The intent is to make web pages feel more responsive by exchanging small amounts of data with the server behind the scenes, so that the entire web page does not have to be reloaded each time the user makes a change. This is meant to increase the web page's interactivity, speed, and usability. AJAX is not a single new technology of its own but is a bunch of several technologies, each flourishing in its own right, coming together in powerful new ways.
What is AJAX?
AJAX is a set of technologies combined in an efficient manner so that the web application runs in a better way, utilizing the benefits of all of these simultaneously. AJAX incorporates:
1. standards-based presentation using XHTML and CSS;
2. dynamic display and interaction using the Document Object Model;
3. data interchange and manipulation using XML and XSLT;
4. asynchronous data retrieval using XMLHttpRequest;
5. and JavaScript binding everything together.

Pivot VectorSpace Approach in Audio-Video Mixing

Definition
The PIVOT VECTOR SPACE APPROACH is a novel technique for audio-video mixing which automatically selects the best audio clip from an available database to be mixed with a given video shot. Until the development of this technique, audio-video mixing was a process that could be done only by professional audio-mixing artists. However, employing these artists is very expensive and is not feasible for home video mixing. Besides, the process is time-consuming and tedious.
In today's era, significant advances are constantly happening in the field of Information Technology. The development in IT-related fields such as multimedia is extremely vast. This is evident with the release of a variety of multimedia products such as mobile handsets, portable MP3 players, digital video camcorders, handycams etc. Hence, activities such as the production of home videos are easy thanks to products such as handycams and digital video camcorders. Such a scenario did not exist a decade ago, since no such products were available in the market. As a result, production of home videos was not possible, since it was reserved completely for professional video artists.
So in today's world, a large number of home videos are being made, and the number of amateur and home video enthusiasts is very large. A home video artist can never match the aesthetic capabilities of a professional audio-mixing artist. However, employing a professional mixing artist to develop a home video is not feasible, as it is expensive, tedious and time-consuming.
The PIVOT VECTOR SPACE APPROACH is a technique that all amateur and home video enthusiasts can use to create video footage with a professional look and feel. This technique saves cost and is fast. Since it is fully automatic, the user need not worry about his aesthetic capabilities. The PIVOT VECTOR SPACE APPROACH uses a pivot vector space mixing framework to incorporate the artistic heuristics for mixing audio with video. These artistic heuristics use high-level perceptual descriptors of audio and video characteristics. Low-level signal processing techniques compute these descriptors.
Video Aesthetic Features
The table shows, from the cinematic point of view, a set of attributed features (such as color and motion) required to describe videos. The computations for extracting aesthetic attributed features from low-level video features occur at the video shot granularity. Because some attributed features are based on still images (such as highlight falloff), we compute them on the key frame of a video shot. We try to optimize the trade-off in accuracy and computational efficiency among the competing extraction methods. Also, even though we assume that the videos considered come in the MPEG format (widely used by several home video camcorders), the features exist independently of a particular representation format.
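As an illustration of how a low-level computation might produce such shot-level descriptors, the sketch below estimates a coarse "brightness" and "motion activity" attribute from the frames of a shot; the descriptor names, thresholds and synthetic frames are invented for the example and are not the actual feature set of the pivot vector space framework.

import numpy as np

def shot_descriptors(frames):
    """frames: list of greyscale frames (2-D numpy arrays) from one video shot.
    Returns two coarse aesthetic attributes computed from low-level signals."""
    stack = np.stack(frames).astype(float)
    brightness = stack.mean()                       # average luminance of the shot
    # Motion activity: mean absolute frame-to-frame difference.
    motion = np.abs(np.diff(stack, axis=0)).mean() if len(frames) > 1 else 0.0
    return {
        "brightness": "high" if brightness > 128 else "low",
        "motion": "fast" if motion > 10 else "slow",
    }

# Example with synthetic frames: a dark, slowly changing shot.
frames = [np.full((120, 160), 40 + i) for i in range(5)]
print(shot_descriptors(frames))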

Alternative Models of Computing

Introduction
The seminar aims at introducing various other forms of computation. Concepts of quantum computing and DNA computing are introduced and discussed. Particular algorithms (like Shor's algorithm) are discussed, and the solution of the Traveling Salesman Problem using DNA computing is also covered. In fine, the seminar aims at opening windows to topics that may become tomorrow's mainstay in computer science.
Richard Feynman thought up the idea of a 'quantum computer', a computer that uses the effects of quantum mechanics to its advantage. Initially, the idea of a quantum computer was primarily of theoretical interest only, but recent developments have brought the idea to the foreground. To start with was the invention, by Peter Shor of Bell Labs, of an algorithm to factor large numbers on a quantum computer. By using this algorithm, a quantum computer would be able to crack codes much more quickly than any ordinary (or classical) computer could. In fact, a quantum computer capable of performing Shor's algorithm would be able to break current cryptography techniques (like RSA) in a matter of seconds. With the motivation provided by this algorithm, quantum computing has gathered momentum and is a hot topic for research around the globe. Leonard M. Adleman solved an unremarkable computational problem with an exceptional technique. He used 'mapping' to solve the TSP. It was a problem that an average desktop machine could solve in a fraction of a second. Adleman, however, took seven days to find a solution. Even then his work was exceptional, because he solved the problem with DNA. It was a breakthrough and a landmark demonstration of computing on the molecular level.
Quantum computing and DNA computing both have two aspects: firstly, building a computer, and secondly, deploying the computer to solve problems that are tough to solve in the present domain of the von Neumann architecture. In this seminar we consider the latter.
Shor's Algorithm
Shor's algorithm is based on a result from number theory, which states that the function f(a) = x^a mod n is a periodic function when x and n are coprime. In the context of Shor's algorithm, n is the number we wish to factor. By coprime we mean that their greatest common divisor is one. If implemented, the algorithm will have a profound effect on cryptography, as it would compromise the security provided by public key encryption (such as RSA). We all know that this security lies in the 'hard' factoring problem; Shor's algorithm makes factoring simple using quantum computing techniques.
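The number-theoretic step can be checked classically on small numbers: once the period r of f(a) = x^a mod n is known, the factors of n usually come from gcd(x^(r/2) - 1, n) and gcd(x^(r/2) + 1, n). The sketch below finds the period by brute force, which is exactly the part a quantum computer would do exponentially faster; the function names and the choice of n = 15, x = 7 are illustrative only.

from math import gcd

def find_period(x, n):
    """Smallest r > 0 with x**r % n == 1 (brute force, classical)."""
    value, r = x % n, 1
    while value != 1:
        value = (value * x) % n
        r += 1
    return r

def shor_classical(n, x):
    """Classical demonstration of the reduction from period finding to factoring."""
    assert gcd(x, n) == 1, "x and n must be coprime"
    r = find_period(x, n)
    if r % 2 != 0:
        return None  # need an even period; try a different x
    y = pow(x, r // 2, n)
    if y == n - 1:
        return None  # trivial case; try a different x
    return gcd(y - 1, n), gcd(y + 1, n)

# Example: factor 15 with base 7. The period of 7^a mod 15 is 4,
# and gcd(7**2 - 1, 15), gcd(7**2 + 1, 15) give the factors 3 and 5.
print(shor_classical(15, 7))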