Thanks

Thanks for visiting this site. It provides the latest seminar topics for any engineering stream, whether Computer Science, Electronics, Information Technology or Mechanical.

Wednesday, December 1, 2010

Adaptive Active Phased Array Radars

Adaptive active phased array radars (AAPARs) are seen as the vehicle for meeting the current requirements for true 'multifunction' radar systems. Their ability to adapt to the environment and schedule their tasks in real time allows them to operate at performance levels well above those achievable with conventional radars.
Their ability to make effective use of all the available RF power and to minimize RF losses also makes them a good candidate for future very long range radars. The AAPAR can provide many benefits in meeting the performance that will be required by tomorrow's radar systems; in some cases it will be the only possible solution.
It provides the radar system designer with an almost infinite range of possibilities. This flexibility, however, needs to be treated with caution: the complexity of the system must not be allowed to grow to the point where it becomes uncontrolled and unstable. The AAPAR breaks down the conventional walls between the traditional system elements (antenna, transmitter, receiver, etc.), so its design must be treated holistically.
Strict requirements on the integrity of the system must be enforced. Rigorous techniques must be used to ensure that the overall flow-down of requirements from the top level is achieved and that testability of the requirements can be demonstrated under both quiescent and adaptive conditions.

Tuesday, February 24, 2009

Satellite Radio

Definition
We all have our favorite radio stations that we preset into our car radios, flipping between them as we drive to and from work, on errands and around town. But when you travel too far from the source station, the signal breaks up and fades into static. Most radio signals can only travel about 30 or 40 miles from their source. On long trips that find you passing through different cities, you might have to change radio stations every hour or so as the signals fade in and out.

Now, imagine a radio station that can broadcast its signal from more than 22,000 miles (35,000 km) away and then come through on your car radio with complete clarity, without your ever having to change the station. Satellite Radio, or Digital Audio Radio Service (DARS), is a subscriber-based radio service that is broadcast directly from satellites. Subscribers will be able to receive up to 100 radio channels featuring compact-disc-quality music, news, weather, sports, talk radio and other entertainment channels.

Satellite radio is an idea nearly 10 years in the making. In 1992, the U.S. Federal Communications Commission (FCC) allocated a spectrum in the "S" band (2.3 GHz) for nationwide broadcasting of satellite-based Digital Audio Radio Service (DARS). In 1997, the FCC awarded 8-year radio broadcast licenses to two companies: Sirius Satellite Radio (formerly CD Radio) and XM Satellite Radio (formerly American Mobile Radio). Both companies have been working aggressively to be prepared to offer their radio services to the public by the end of 2000. It is expected that automotive radios will be the largest application of satellite radio.

The satellite era began in September 2001 when XM launched in selected markets, followed by full nationwide service in November. Sirius lagged slightly, with a gradual rollout beginning in February, including a quiet launch in the Bay Area on June 15; the nationwide launch came on July 1.

DSP Processor

Definition
The best way to understand the requirements is to examine typical DSP algorithms and identify how their computational requirements have influenced the architectures of DSP processors. Let us consider one of the most common processing tasks: the finite impulse response (FIR) filter.
For each tap of the filter, a data sample is multiplied by a filter coefficient and the result is added to a running sum over all of the taps. Hence the main component of the FIR filter algorithm is the dot product: multiply and add. These operations are not unique to the FIR filter; in fact, multiplication is one of the most common operations performed in signal processing. Convolution, IIR filtering and the Fourier transform also involve heavy use of the multiply-accumulate operation. Originally, microprocessors implemented multiplication by a series of shift and add operations, each of which consumed one or more clock cycles. So, first of all, a DSP processor requires hardware that can multiply in a single cycle. Most DSP algorithms require a multiply-accumulate (MAC) unit.
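To make the multiply-accumulate structure concrete, here is a minimal direct-form FIR filter sketch in plain Python. It is only an illustration of the per-tap MAC loop described above; the function name and the example coefficients are invented for the sketch, not taken from any particular DSP library.

```python
def fir_filter(samples, coeffs):
    """Direct-form FIR filter: each output is a sum of multiply-accumulate (MAC) steps."""
    n_taps = len(coeffs)
    output = []
    for n in range(len(samples)):
        acc = 0.0                      # running sum for this output sample
        for k in range(n_taps):        # one MAC per filter tap
            if n - k >= 0:             # skip taps before the start of the signal
                acc += coeffs[k] * samples[n - k]
        output.append(acc)
    return output

# Example: 4-tap moving-average filter (illustrative coefficients)
y = fir_filter([1.0, 2.0, 3.0, 4.0, 5.0], [0.25, 0.25, 0.25, 0.25])
print(y)   # [0.25, 0.75, 1.5, 2.5, 3.5]
```

On a DSP processor, the inner loop is exactly what maps onto the single-cycle MAC unit, with one coefficient and one data sample fetched per cycle.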
In comparison to other types of computing tasks, DSP applications typically have very high computational requirements, since they often must execute DSP algorithms in real time on lengthy segments of data. Parallel operation of several independent execution units is therefore a must: for example, in addition to the MAC unit, an ALU and a shifter are also required. Executing a MAC in every clock cycle requires more than just a single-cycle MAC unit; it also requires the ability to fetch the MAC instruction, a data sample and a filter coefficient from memory in a single cycle. Hence good DSP performance requires high memory bandwidth, higher than that of general-purpose microprocessors, which had a single bus connection to memory and could make only one access per cycle.

The most common approach was to use two or more separate banks of memory, each of which was accessed by its own bus and could be written or read in a single cycle: programs are stored in one memory and data in another. With this arrangement, the processor can fetch an instruction and a data operand in parallel in every cycle. Since many DSP algorithms consume two data operands per instruction, a further optimization commonly used is to include a small bank of RAM near the processor core that is used as an instruction cache. When a small group of instructions is executed repeatedly, the cache is loaded with those instructions, freeing the instruction bus to be used for data fetches instead of instruction fetches, and thus enabling the processor to execute a MAC in a single cycle.

These high memory bandwidth requirements are often further supported by dedicated hardware for calculating memory addresses. These address generation units operate in parallel with the DSP processor's main execution units, enabling it to access data at a new location in memory without pausing to calculate the new address.
Memory accesses in DSP algorithms tend to exhibit very predictable patterns: for example, for each sample in an FIR filter, the filter coefficients are accessed sequentially from start to finish, and the accesses then start over from the beginning of the coefficient vector when the next input sample is processed. This is in contrast to other computing tasks, such as database processing, where accesses to memory are far less predictable. DSP processor address generation units take advantage of this predictability by supporting specialized addressing modes that enable the processor to efficiently access data in the patterns commonly found in DSP algorithms. The most common of these modes is register-indirect addressing with post-increment, which automatically increments the address pointer in algorithms where repetitive computations are performed on a series of data stored sequentially in memory. Without this feature, the programmer would need to spend instructions explicitly incrementing the address pointer.

Jini Technology

Definition
Part of the original vision for Java, Jini was put on the back burner while Sun waited for Java to gain widespread acceptance. As the Jini project revved up and more than 30 technology partners signed on, it became impossible to keep it under wraps. So Sun cofounder Bill Joy, who helped dream up Jini, leaked the news to the media earlier this month. It was promptly smothered in accolades and hyperbolic prose.
When you plug a new Jini-enabled device into a network, it broadcasts a message to any lookup service on the network saying, in effect, "Here I am. Is anyone else out there?" The lookup service registers the new machine, keeps a record of its attributes and sends a message back to the Jini device, letting it know where to reach the lookup service if it needs help. So when it comes time to print, for example, the device calls the lookup service, finds what it needs and sends the job to the appropriate machine. Jini actually consists of a very small piece of Java code that runs on your computer or device.
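As a rough illustration of this register-then-lookup flow, here is a toy registry written in Python. This is emphatically not the real Jini API, which is Java-based and uses multicast discovery, leasing and RMI proxies; the class and method names below are invented purely to show the "Here I am" / "find me a printer" sequence described above.

```python
class LookupService:
    """Toy stand-in for a Jini lookup service: devices register, clients look up."""
    def __init__(self):
        self.registry = {}          # service type -> list of device records

    def register(self, service_type, device):
        # "Here I am" step: record the device and its attributes
        self.registry.setdefault(service_type, []).append(device)
        return "lookup-service-address"   # tells the device where to reach the service later

    def lookup(self, service_type):
        # A client asks for a service (e.g. printing) and gets a matching device back
        matches = self.registry.get(service_type, [])
        return matches[0] if matches else None

lookup = LookupService()
lookup.register("printer", {"name": "office-laser", "pages_per_min": 20})
printer = lookup.lookup("printer")
print(printer["name"])   # the client would now send its print job to this device
```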
Jini lets you dynamically move code, and not just data, from one machine to another. That means you can send a Java program to any other Jini machine and run it there, harnessing the power of any machine on your network to complete a task or run a program.

So far, Jini seems to offer little more than basic network services. Don't expect it to turn your household devices into supercomputers; it will take some ingenious engineering before your stereo will start dating your laptop. Jini can run on small handheld devices with little or no processing power, but these devices need to be network-enabled and need to be controlled by another Jini-enabled hardware or software piece by proxy.

The first customer shipment is slated for the fall. Jini-enabled software could ship by the end of the year, and the first Jini-enabled devices could be in stores by next year.
Security. Jini will use the same security and authentication measures as Java. Unfortunately, Java's security model has not been introduced yet.

Microsoft. Without Jini, Java is just a language that can run on any platform. With it, Java becomes a networked system with many of the same capabilities as a network operating system, like Windows NT. Don't expect Microsoft to support Jini.

Lucent's Inferno, a lightweight OS for connecting devices; Microsoft's Millennium, a Windows distributed computing model; and Hewlett-Packard's JetSend, a protocol that lets peripheral devices talk.

Sun Microsystems has a dream: The future of computing will not center around the personal computer, but around the network itself. Any network will do -- your office Ethernet grid, your home-office local area network, the Internet; it doesn't matter.
Sun has carried this banner for years, and essentially cemented its network-centric computing model with the invention of the Java programming language. This week in San Francisco, Sun -- with 37 big-name partners -- unveiled Jini, its latest and most ambitious initiative yet. A programming platform and connection technology, Jini is designed to allow painless, immediate networking of any and all compliant electronic devices, be they personal digital assistants, cell phones, dishwashers, printers and so on. Partnering companies include hardware and software vendors, and marquee consumer electronics players like Sony.

Dual Core Processor

Definition
Given the technical difficulties in cranking higher clock speeds out of present single-core processors, the dual-core architecture has started to establish itself as the answer for the development of future processors. With the release of the AMD dual-core Opteron and the Intel Pentium Extreme Edition 840, April 2005 officially marked the beginning of dual-core endeavors for both companies.
The transition from a single-core to a dual-core architecture was triggered by a couple of factors. According to Moore's Law, the number of transistors (complexity) on a microprocessor doubles approximately every 18 months. The latest 2 MB Prescott core possesses more than 160 million transistors; breaking the 200 million mark is just a matter of time. Transistor count is one of the reasons driving the industry toward the dual-core architecture. Instead of using the astronomically high transistor counts available to design a new, more complex single-core processor that would offer higher performance than the present offerings, chip makers have decided to put these transistors to use in producing two identical yet independent cores and combining them into a single package.
To them, this is actually a far better use of the available transistors, and in return it should give consumers more value for their money. Besides, with the single core's thermal envelope being pushed to its limit and the severe current-leakage issues that have hit the silicon manufacturing industry ever since the transition to 90 nm chip fabrication, it is extremely difficult for chip makers (particularly Intel) to squeeze more clock speed out of the present single-core design. Pushing for higher clock speeds is not a feasible option at present because of transistor current leakage, and adding more features to the core would increase the complexity of the design and make it harder to manage. These are the factors that have made the dual-core option the more viable alternative for making full use of the available transistors.
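As a back-of-the-envelope illustration of the doubling claim above (the 160-million figure is the Prescott count quoted in the text; the 18-month doubling period is the usual rule of thumb, not a measured value), a couple of lines of Python show why "breaking the 200 million mark" was indeed only a matter of time:

```python
import math

transistors_now = 160e6          # ~160 million transistors (2 MB Prescott core)
doubling_period_months = 18      # Moore's Law rule of thumb

# Months needed to reach 200 million transistors at that growth rate
months_to_200m = doubling_period_months * math.log2(200e6 / transistors_now)
print(round(months_to_200m, 1))  # ~5.8 months
```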
What is a dual core processor?
A dual core processor is a CPU with two separate cores on the same die, each with its own cache. It's the equivalent of getting two microprocessors in one. In a single-core or traditional processor the CPU is fed strings of instructions it must order, execute, then selectively store in its cache for quick retrieval. When data outside the cache is required, it is retrieved through the system bus from random access memory (RAM) or from storage devices. Accessing these slows down performance to the maximum speed the bus, RAM or storage device will allow, which is far slower than the speed of the CPU. The situation is compounded when multi-tasking. In this case the processor must switch back and forth between two or more sets of data streams and programs. CPU resources are depleted and performance suffers.
In a dual core processor each core handles incoming data strings simultaneously to improve efficiency. Just as two heads are better than one, so are two hands: while one core is executing, the other can be accessing the system bus or executing its own code. Adding to this favorable scenario, both AMD's and Intel's dual-core flagships are 64-bit. To utilize a dual core processor, the operating system must be able to recognize multi-threading and the software must have simultaneous multi-threading technology (SMT) written into its code. SMT enables parallel multi-threading wherein the cores are served multi-threaded instructions in parallel. Without SMT the software will only recognize one core. Adobe Photoshop is an example of SMT-aware software. SMT is also used with multi-processor systems common to servers.
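As a minimal sketch of why two cores help with independent work (assuming a multi-core machine and an operating system scheduler that spreads processes across cores; the worker function is just a stand-in for real application code), two CPU-bound tasks can be handed to separate worker processes so that each can occupy its own core:

```python
from concurrent.futures import ProcessPoolExecutor

def busy_work(n):
    """CPU-bound stand-in for one application task."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    # With two workers, the OS can schedule one on each core, so the two
    # tasks run simultaneously instead of time-slicing a single core.
    with ProcessPoolExecutor(max_workers=2) as pool:
        results = list(pool.map(busy_work, [10_000_000, 10_000_000]))
    print(results)
```

Separate processes are used here rather than Python threads so that both cores genuinely execute in parallel; a native SMT-aware application would typically use threads within a single process to the same effect.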
An attractive feature of dual core processors is that they do not require a new motherboard, but can be used in existing boards that feature the correct socket. For the average user the difference in performance will be most noticeable in multi-tasking until more software is SMT-aware. Servers running multiple dual core processors will see an appreciable increase in performance.

Sensors on 3D Digitization

Definition
Machine vision involves the analysis of the properties of the luminous flux reflected or radiated by objects. To recover the geometrical structure of these objects, either to recognize them or to measure their dimensions, two basic vision strategies are available [1].
Passive vision attempts to analyze the structure of the scene under ambient light [1]. Stereoscopic vision is a passive optical technique. The basic idea is that two or more digital images are taken from known locations. The images are then processed to find the correlations between them. As soon as matching points are identified, the geometry can be computed.
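For the simplest rectified two-camera case, once a point has been matched in both images its depth follows directly from the disparity. The sketch below assumes an idealized pinhole camera pair with parallel optical axes; the focal length, baseline and disparity values are made up for the example.

```python
def stereo_depth(focal_length_px, baseline_m, disparity_px):
    """Depth of a matched point for a rectified stereo pair: Z = f * B / d."""
    return focal_length_px * baseline_m / disparity_px

# Example: 800 px focal length, 12 cm baseline, 16 px disparity -> 6 m range
print(stereo_depth(800.0, 0.12, 16.0))   # 6.0
```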
Active vision attempts to reduce the ambiguity of scene analysis by structuring the way in which images are formed. Sensors that capitalize on active vision can resolve most of the ambiguities found with two-dimensional imaging systems. Lidar-based or triangulation-based laser range cameras are examples of active vision techniques. One digital 3D imaging system based on optical triangulation was developed and demonstrated.
AUTOSYNCHRONIZED SCANNER
The auto-synchronized scanner, depicted schematically in Figure 1, can provide registered range and colour data of visible surfaces. A 3D surface map is captured by scanning a laser spot across the scene, collecting the reflected laser light, and finally focusing the beam onto a linear laser spot sensor. Geometric and photometric corrections of the raw data give two images in perfect registration: one with x, y, z co-ordinates and a second with reflectance data. A laser beam composed of multiple visible wavelengths is used for the purpose of measuring the colour map of the scene.
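In a single-spot triangulation sensor, range is recovered from the known laser deflection angle and the position at which the reflected spot lands on the sensor. The sketch below is a generic idealized model (pinhole camera, laser offset by a baseline along the x-axis), not the specific synchronized-mirror geometry of the auto-synchronized scanner; all numbers are illustrative.

```python
import math

def triangulation_range(f, b, theta, p):
    """
    Range z of the laser spot for an idealized single-spot triangulation scanner.
    f     : camera focal length (same units as p, e.g. mm)
    b     : baseline between laser source and camera centre (e.g. metres)
    theta : laser deflection angle from the camera's optical axis (radians)
    p     : position of the imaged spot on the sensor (same units as f)
    From the similar-triangle relation p = f*(b - z*tan(theta))/z,
    solving for z gives z = f*b / (p + f*tan(theta)).
    """
    return f * b / (p + f * math.tan(theta))

# Example: 25 mm lens, 10 cm baseline, 5 degree deflection, spot imaged at 1.5 mm
print(round(triangulation_range(25.0, 0.10, math.radians(5.0), 1.5), 3))  # ~0.678 m
```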

MIMO Wireless Channels: Capacity and Performance Prediction

Multiple-input multiple-output (MIMO) communication techniques make use of multi-element antenna arrays at both the TX and the RX side of a radio link and have been shown theoretically to drastically improve capacity over more traditional single-input multiple-output (SIMO) systems [2, 3, 5, 7]. SIMO channels in wireless networks can provide diversity gain, array gain and interference-cancelling gain, among other benefits. In addition to these same advantages, MIMO links can offer a multiplexing gain by opening Nmin parallel spatial channels, where Nmin is the minimum of the number of TX and RX antennas. Under certain propagation conditions, capacity gains proportional to Nmin can be achieved [8]. Space-time coding [14] and spatial multiplexing [1, 2, 7, 16] (a.k.a. BLAST) are popular signal processing techniques that make use of MIMO channels to improve the performance of wireless networks.

Previous work and open problems. The literature on realistic MIMO channel models is still scarce. For the line-of-sight (LOS) case, previous work includes . In the fading case, previous studies have mostly been confined to i.i.d. Gaussian matrices, an idealistic assumption in which the entries of the channel matrix are independent complex Gaussian random variables [2, 6, 8]. The influence of spatial fading correlation on either the TX or the RX side of a wireless MIMO radio link has been addressed in [3, 15]. In practice, however, the realization of high MIMO capacity is sensitive not only to the fading correlation between individual antennas but also to the rank behavior of the channel. In the existing literature, high-rank behavior has been loosely linked to the existence of a dense scattering environment. MIMO technologies have recently been demonstrated successfully in indoor-to-indoor channels, where rich scattering is almost always guaranteed.
Definition:
MIMO is a technique for boosting wireless bandwidth and range by taking advantage of multiplexing. MIMO algorithms in a radio chipset send information out over two or more antennas. The radio signals reflect off objects, creating multiple paths that in conventional radios cause interference and fading. But MIMO uses these paths to carry more information, which is recombined on the receiving side by the MIMO algorithms. A conventional radio uses one antenna to transmit a data stream. A typical smart-antenna radio, on the other hand, uses multiple antennas. This design helps combat distortion and interference. Examples of multiple-antenna techniques include switched antenna diversity selection, radio-frequency beamforming, digital beamforming and adaptive diversity combining. These smart-antenna techniques are one-dimensional, whereas MIMO is multi-dimensional: it builds on one-dimensional smart-antenna technology by simultaneously transmitting multiple data streams through the same channel, which increases wireless capacity.
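As a rough numerical illustration of the multiplexing gain discussed above, the sketch below evaluates the standard MIMO capacity expression C = log2 det(I + (SNR/N_TX) H H^H) for one channel realization. It assumes the idealized i.i.d. complex Gaussian channel model, equal power allocation across transmit antennas and no channel knowledge at the transmitter; the SNR and antenna counts are arbitrary choices for the example.

```python
import numpy as np

def mimo_capacity(H, snr):
    """Capacity (bits/s/Hz) of one MIMO channel realization with equal power allocation:
    C = log2 det(I + (snr / n_tx) * H * H^H)."""
    n_rx, n_tx = H.shape
    A = np.eye(n_rx) + (snr / n_tx) * H @ H.conj().T
    return float(np.log2(np.linalg.det(A).real))

rng = np.random.default_rng(0)
n_tx = n_rx = 4
# One i.i.d. complex Gaussian channel realization (unit average power per entry)
H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
print(mimo_capacity(H, snr=10.0))   # snr is linear (10 = 10 dB); typically several times a 1x1 link
```

Averaging this quantity over many random realizations of H shows the capacity growing roughly in proportion to Nmin, the smaller of the TX and RX antenna counts, which is the multiplexing gain referred to earlier in this section.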