STORAGE AREA NETWORK (SAN):
SAN is an architecture for attaching remote computer storage devices (such as disk arrays, tape libraries, and optical jukeboxes) to servers in such a way that the devices appear locally attached to the operating system. It is a network whose primary purpose is the transfer of data between computer systems and storage elements. A SAN consists of a communication infrastructure, which provides physical connections, and a management layer, which organizes the connections, storage elements, and computer systems so that data transfer is secure.
A storage area network (SAN) is a high-speed special-purpose network (or subnetwork) that interconnects different kinds of data storage devices with associated data servers on behalf of a larger network of users. Typically, a storage area network is part of the overall network of computing resources for an enterprise. A storage area network is usually clustered in close proximity to other computing resources, such as an IBM z990 mainframe, but may also extend to remote locations for backup and archival storage, using wide area network carrier technologies such as ATM or SONET.
A SAN can also be a storage system consisting of storage elements, storage devices, computer systems, or appliances, plus all control software, communicating over a network.
A SAN is a high-speed network attaching high-speed servers and storage devices. A SAN allows any-to-any connections across the network, using interconnecting elements such as routers, gateways, hubs, switches, and directors. A SAN can be shared between servers or dedicated to one server, and it can be local or extended over geographical distances.
A SAN brings new methods of attaching storage to servers, improving both availability and performance.
A SAN facilitates direct, high-speed data transfers between servers and storage devices in three ways:
1. Server to storage: the traditional model of interaction with storage devices.
2. Server to server: used for high-volume communication between servers.
3. Storage to storage: this data-movement capability enables data to be copied or moved without server intervention.
Historically, data centers first created "islands" of SCSI disk arrays as Direct Attached Storage (DAS), each dedicated to an application and visible as a number of "virtual hard drives" (i.e. LUNs). Essentially, a SAN consolidates such storage islands using a high-speed network.
Operating systems maintain their own file systems on dedicated, non-shared LUNs, as though they were local. If multiple systems simply attempted to share a LUN, they would interfere with each other and quickly corrupt the data. Any planned sharing of data between computers within a LUN requires advanced solutions.
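To see why, consider a toy model, invented purely for illustration and not representing any real filesystem: two hosts each cache their own copy of a shared LUN's allocation metadata, and whichever host writes its cache back last silently wipes out the other's changes.

# A toy illustration (not a real filesystem) of why two hosts cannot simply
# share one LUN: each host caches its own copy of the allocation metadata and
# whichever host flushes last silently overwrites the other's allocations.
lun_metadata = {"allocated_blocks": set()}   # metadata stored on the shared LUN

class Host:
    def __init__(self, name):
        self.name = name
        # Each host reads the metadata once and then works from its own cache.
        self.cache = {"allocated_blocks": set(lun_metadata["allocated_blocks"])}

    def create_file(self, blocks):
        self.cache["allocated_blocks"].update(blocks)   # updates the local cache only

    def flush(self):
        # Writes the whole cached metadata back, unaware of the other host's changes.
        lun_metadata["allocated_blocks"] = set(self.cache["allocated_blocks"])

host_a, host_b = Host("A"), Host("B")
host_a.create_file({1, 2, 3})
host_b.create_file({4, 5})      # host B never sees A's cached allocation
host_a.flush()
host_b.flush()                  # B's flush discards blocks 1-3: A's file is orphaned
print(lun_metadata)             # {'allocated_blocks': {4, 5}}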
Despite such issues, SANs help to increase storage capacity utilization, since multiple servers consolidate their private storage space onto the disk arrays.
Sharing storage usually simplifies storage administration and adds flexibility since cables and storage devices do not have to be physically moved to shift storage from one server to another.
Other benefits include the ability to allow servers to boot from the SAN itself. This allows for quick and easy replacement of faulty servers, since the SAN can be reconfigured so that a replacement server uses the LUN of the faulty server. This process can take as little as half an hour and is a relatively new idea being pioneered in newer data centers.
SAN infrastructure
Figure: a SAN switch with optical Fibre Channel connectors installed.
SANs often utilize a Fibre Channel fabric topology - an infrastructure specially designed to handle storage communications. It provides faster and more reliable access. A fabric is similar in concept to a network segment in a local area network. A typical Fibre Channel SAN fabric is made up of a number of Fibre Channel switches.
Today, all major SAN equipment vendors also offer some form of Fibre Channel routing solution, and these bring substantial scalability benefits to the SAN architecture by allowing data to cross between different fabrics without merging them.
Compatibility
One of the early problems with Fibre Channel SANs was that the switches and other hardware from different manufacturers were not entirely compatible. Although the basic storage protocol (FCP) was always quite standard, some of the higher-level functions did not interoperate well. Similarly, many host operating systems would react badly to other operating systems sharing the same fabric. Many solutions were pushed to market before standards were finalized, and vendors have since innovated around the standards.
SANs at home
A SAN, being a network of large disk arrays, is primarily used in large-scale, high-performance enterprise storage operations. SAN equipment is relatively expensive, so Fibre Channel host bus adapters are rare in desktop computers. The iSCSI SAN technology is expected to eventually produce cheap SANs, but it is unlikely that this technology will be used outside the enterprise data center environment.
SANs in media and entertainment
Video editing workgroups require very high data transfer rates. Outside of the enterprise market, this is one area that greatly benefits from SANs.
Per-node bandwidth usage control, sometimes referred to as Quality of Service (QoS), is especially important in video workgroups as it ensures fair and prioritized bandwidth usage across the network if there is insufficient open bandwidth available. Avid Unity, Apple's Xsan and Tiger Technology MetaSAN are specifically designed for video networks and offer this functionality.
References
"TechEncyclopedia:IPStorage". http://www.techweb.com/encyclopedia/defineterm.jhtml?term=IPstorage.
Retrieved 2007-12-09.
"TechEncyclopedia:SANoIP". http://www.techweb.com/encyclopedia/defineterm.jhtml?term=SANoIP. Retrieved 2007-12-09.
Wikepedia.
IBM storage area network article.
Podcasting
A podcast is a series of digital media files (either audio or video) that are released episodically and downloaded through web syndication. The mode of delivery is what differentiates podcasts from other ways of accessing media files over the Internet, such as simple download or streamed webcasts: special client software applications known as podcatchers (like iTunes, Zune, Juice, and Winamp) are used to automatically identify and download new files in the series when they are released by accessing a centrally-maintained web feed that lists all files associated with the series. New files can thus be downloaded automatically by the podcatcher and stored locally on the user's computer or other device for offline use, giving simpler access to episodic content.
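To make the podcatcher idea concrete, here is a minimal sketch in Python. It assumes a standard RSS 2.0 feed; the feed URL and download directory are placeholders, and real podcatchers such as iTunes or Juice add scheduling, episode bookkeeping, and device sync on top of this.

import os
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/podcast/feed.xml"   # hypothetical feed address
DOWNLOAD_DIR = "episodes"

def fetch_new_episodes(feed_url: str, download_dir: str) -> None:
    """Download any enclosure in the feed that is not already stored locally."""
    os.makedirs(download_dir, exist_ok=True)
    with urllib.request.urlopen(feed_url) as response:
        tree = ET.parse(response)
    # In RSS 2.0, each channel/item/enclosure element points at a media file.
    for item in tree.getroot().iter("item"):
        enclosure = item.find("enclosure")
        if enclosure is None:
            continue
        url = enclosure.get("url")
        filename = os.path.join(download_dir, os.path.basename(url))
        if os.path.exists(filename):          # episode already downloaded: skip
            continue
        print("Downloading", url)
        urllib.request.urlretrieve(url, filename)

if __name__ == "__main__":
    fetch_new_episodes(FEED_URL, DOWNLOAD_DIR)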
Most dictionary definitions of a podcast fall into one of two camps as of September 2009. One set focuses on the "on-demand" nature of podcasts. Another set requires the automatic or syndication posting. There are problems with both definitions. The first is too open. Under such a definition, a paid music download could technically be a podcast. Most audiences would disagree. The second is too limiting. It does not allow for manual downloads. Researchers at the Center for Journalism and Mass Communication Research at the University of Texas at Austin are proposing a three-part definition of a podcast: first, that it is episodic; second, that it is downloadable; and third, that it is program-driven, mainly with a host and/or theme.
Like the term broadcast, podcast can refer either to an ongoing series or to episodes of a particular program. A podcaster is the person who creates the content.
Podcasting: What It Means
2005 word of the year – New Oxford American Dictionary
A podcast is a media file that is distributed by subscription (paid or unpaid) over the Internet using syndication feeds, for playback on mobile devices and personal computers.
Podcasting may auto-update an iPod
Podcasting does NOT require an iPod!
Podcasts were first called “audio blogs”
Name:
The term was mentioned by Ben Hammersley in The Guardian newspaper in a February 2004 article, along with other proposed names for the new medium. It is a portmanteau of the words "iPod" and "broadcast", the Apple iPod being the brand of portable media player for which early podcasting scripts were developed (see history of podcasting), allowing podcasts to be automatically transferred from a personal computer to a mobile device after download.
It has never been necessary, despite the source of the name, to use an iPod or any other form of portable media player to use podcasts; the content can be accessed using any computer capable of playing media files. A backronym has been posited where podcast stands for "Personal On Demand broadCAST".
History:
Podcasting began to catch hold with the public in late 2004, though during the 1998–2001 dot-com era there were multiple "podcasts" done by major companies, such as Real Networks and ESPN.com. Many individuals and groups contributed to the emergence and popularity of podcasts.
The most common audio file format used is MP3.
“Audio Blogs”
Web logs (blogs) are web pages that are easily updated – text, comments, and exchanges are possible
These web pages with text accompany most podcasts – they allow for information exchange and engage the audience
Process
Make recording, gather visuals
Assemble into a multi-media presentation
Post to the Web – most often with a description at a blog
Generate RSS file (a minimal sketch follows this list)
Syndication
Clients receive notice via RSS
Their “podcatcher” automatically downloads the files to the computer and sends them to an iPod (if used)
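As an illustration of the “Generate RSS file” step, here is a minimal sketch that writes an RSS 2.0 feed with a single episode; the titles, URLs, dates, and file sizes are placeholders, not real data.

import xml.etree.ElementTree as ET

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Sample Podcast"                 # placeholder title
ET.SubElement(channel, "link").text = "https://example.com/blog"        # accompanying blog
ET.SubElement(channel, "description").text = "Episode notes live on the blog."

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Episode 1"
ET.SubElement(item, "pubDate").text = "Mon, 14 Dec 2009 00:00:00 GMT"
# The enclosure element is what a podcatcher actually reads: URL, size, MIME type.
ET.SubElement(item, "enclosure",
              url="https://example.com/media/episode1.mp3",
              length="12345678", type="audio/mpeg")

ET.ElementTree(rss).write("feed.xml", encoding="utf-8", xml_declaration=True)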
Podcasts for Learning
Have been used in higher education for three years! Originally called “audio blogs”
iTunes University
Sample Accounting Podcasts
EPN Education Podcast Network
Accounting Best Practices – Bragg & Nach
Ernst & Young Podcast Channels
CPA Podcasts
Podcasting News - Business
Creating Podcasts
In simplest audio form (mp3), podcasts can be created using free software.
Audacity, Media Player, Feedburner, and other online tools
Enhanced podcasts use the M4A (MPEG-4 audio) format
Graphics, text, video
ProfCast – drag and drop visuals – chapters
OS X platform required (Mac, not PC)
Trademarks
2005
Figure: the logo used by Apple to represent podcasting.
On February 5, 2005, Shae Spencer Management LLC of Fairport, New York filed a trademark application to register PODCAST for an "online prerecorded radio program over the internet". On September 9, 2005, the United States Patent and Trademark Office rejected the application, citing Wikipedia's podcast entry as describing the history of the term.
As of September 20, 2005, known trademarks that attempted to capitalize on podcast include: Podcast Realty, GuidePod, PodGizmo, Pod-Casting, MyPod, Podvertiser, Podango, ePodcast, PodCabin, Podcaster, PodShop, PodKitchen, Podgram, GodPod and Podcast.
2006
On September 26, 2006, it was reported that Apple Computer had started to crack down on businesses using the acronym "POD" in product and company names. Apple sent a cease-and-desist order that week to Podcast Ready, Inc., which markets an application known as "myPodder". Lawyers for Apple contended that the term "pod" has been used by the public to refer to Apple's music player so extensively that it falls under Apple's trademark cover. It was speculated that such activity was part of a bigger campaign for Apple to expand the scope of its existing iPod trademark, which included trademarking "IPODCAST", "IPOD", and "POD". On November 16, 2006, Apple's Trademark Department returned a letter stating that Apple does not object to third-party usage of "podcast" to refer to podcasting services and that Apple does not license the term.
2007
As of February 2007, there were 24 attempts to register trademarks containing the word "PODCAST" in the United States, but only "PODCAST READY" from Podcast Ready, Inc. was approved.
DNA COMPUTING
• What is a DNA computer?
With advancements in technology and research, we have come to know that millions of natural supercomputers exist inside living organisms, including our bodies. DNA (deoxyribonucleic acid) molecules, the material our genes are made of, have the potential to perform calculations many times faster than the world's most powerful human-built computers. DNA might one day be integrated into a computer chip to create a so-called biochip that will push computers even faster. DNA molecules have already been harnessed to perform complex mathematical problems. While still in their infancy, DNA computers will be capable of storing billions of times more data than our personal computers.
A DNA computer is a molecular computer that works biochemically. It "computes" using enzymes that react with DNA strands, causing chain reactions. The chain reactions act as a kind of simultaneous computing or parallel processing, whereby many possible solutions to a given problem can be presented simultaneously with the correct solution being one of the results.
• DNA Computing Technology-
In 1994, Leonard Adleman introduced the idea of using DNA to solve complex mathematical problems. Adleman, a computer scientist at the University of Southern California, came to the conclusion that DNA had computational potential after reading the book "Molecular Biology of the Gene," written by James Watson, who co-discovered the structure of DNA in 1953. In fact, DNA is very similar to a computer hard drive in how it stores permanent information about your genes.
Adleman outlined how to use DNA to solve a well-known mathematical problem, the directed Hamiltonian Path problem (a relative of the "traveling salesman" problem). The goal is to find a route between a number of cities that passes through each city exactly once. As more cities are added, the problem becomes much harder. Adleman chose to find such a route between seven cities.
The steps taken in the Adleman DNA computer experiment are:-
1. Strands of DNA represent the seven cities. In genes, genetic coding is represented by the letters A, T, C and G. A sequence of these four letters represented each city and each possible flight path.
2. These molecules are then mixed in a test tube, with some of these DNA strands sticking together. A chain of these strands represents a possible answer.
3. Within a few seconds, all of the possible combinations of DNA strands, which represent answers, are created in the test tube.
4. Adleman eliminates the wrong molecules through chemical reactions, which leaves behind only the flight paths that connect all seven cities.
The following algorithm solves the Hamiltonian Path problem, regardless of the type of computer used (a brute-force sketch in code follows the list):
1. Generate all possible routes.
2. Select itineraries that start with the proper city and end with the final city.
3. Select itineraries with the correct number of cities.
4. Select itineraries that contain each city only once.
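For comparison, here is a brute-force sketch of those four filtering steps run on a conventional computer. The seven cities and the flight list are invented for illustration; Adleman encoded cities and flights as DNA strands and let the chemistry generate the candidate routes in parallel.

from itertools import product

cities = ["A", "B", "C", "D", "E", "F", "G"]                 # hypothetical cities
flights = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"),
           ("E", "F"), ("F", "G"), ("B", "D"), ("C", "E")}   # hypothetical direct flights
start, end = "A", "G"

solutions = []
for length in range(2, len(cities) + 1):
    for route in product(cities, repeat=length):             # step 1: all possible routes
        if route[0] != start or route[-1] != end:            # step 2: proper start and end city
            continue
        if len(route) != len(cities):                        # step 3: correct number of cities
            continue
        if len(set(route)) != len(cities):                   # step 4: each city only once
            continue
        # A valid answer must also follow an existing flight between consecutive cities.
        if all((a, b) in flights for a, b in zip(route, route[1:])):
            solutions.append(route)

for path in solutions:
    print(" -> ".join(path))   # prints A -> B -> C -> D -> E -> F -> G for this flight list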
The success of the Adleman DNA computer proves that DNA can be used to calculate complex mathematical problems. However, this early DNA computer is far from challenging silicon-based computers in terms of speed. The Adleman DNA computer created a group of possible answers very quickly, but it took days for Adleman to narrow down the possibilities. Another drawback of his DNA computer is that it requires human assistance. The goal of the DNA computing field is to create a device that can work independently of human involvement.
Three years after Adleman's experiment, researchers at the University of Rochester developed logic gates made of DNA. Logic gates are a vital part of how your computer carries out the functions that you command it to do. These gates convert binary code moving through the computer into a series of signals that the computer uses to perform operations. Currently, logic gates interpret input signals from silicon transistors and convert those signals into an output signal that allows the computer to perform complex functions.
The Rochester team's DNA logic gates are the first step toward creating a computer that has a structure similar to that of an electronic PC. Instead of using electrical signals to perform logical operations, these DNA logic gates rely on DNA code. They detect fragments of genetic material as input, splice together these fragments and form a single output. For instance, a genetic gate called the "And gate" links two DNA inputs by chemically binding them so they're locked in an end-to-end structure, similar to the way two Legos might be fastened by a third Lego between them. The researchers believe that these logic gates might be combined with DNA microchips to create a breakthrough in DNA computing.
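As a rough software analogue, invented here for illustration (real gates act on actual genetic material, not strings), the behaviour of such an "And gate" can be sketched as: produce an end-to-end output strand only when both expected input fragments are detected.

def dna_and_gate(detected_fragments, input_a, input_b):
    """Return the spliced output strand if both inputs were detected, else None."""
    if input_a in detected_fragments and input_b in detected_fragments:
        return input_a + input_b      # end-to-end binding, like two Legos joined by a third
    return None

sample = {"ATTCG", "GGACT", "TTAGC"}                # made-up fragments in the "test tube"
print(dna_and_gate(sample, "ATTCG", "GGACT"))       # -> "ATTCGGGACT": both inputs present
print(dna_and_gate(sample, "ATTCG", "CCCCC"))       # -> None: one input missing, gate stays off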
DNA computer components – logic gates and biochips – will take years to develop into a practical, workable DNA computer. If such a computer is ever built, scientists say that it will be more compact, accurate and efficient than conventional computers.
• Comparison between silicon & DNA computers-
As long as there are cellular organisms, there will always be a supply of DNA.
The large supply of DNA makes it a cheap resource.
Unlike the toxic materials used to make traditional microprocessors, DNA biochips can be made cleanly.
DNA computers are many times smaller than today's computers.
DNA's key advantage is that it will make computers smaller than any computer that has come before them, while at the same time holding more data. One pound of DNA has the capacity to store more information than all the electronic computers ever built; and the computing power of a teardrop-sized DNA computer, using the DNA logic gates, will be more powerful than the world's most powerful supercomputer. More than 10 trillion DNA molecules can fit into an area no larger than 1 cubic centimeter (0.06 cubic inches). With this small amount of DNA, a computer would be able to hold 10 terabytes of data, and perform 10 trillion calculations at a time. By adding more DNA, more calculations could be performed.
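A back-of-envelope calculation, using standard textbook constants rather than figures from this article, shows why such density claims are plausible:

# Order-of-magnitude estimate of the storage capacity of one pound of DNA.
AVOGADRO = 6.022e23          # molecules per mole
BP_MOLAR_MASS = 650.0        # grams per mole of double-stranded base pairs (approximate)
BITS_PER_BP = 2.0            # four possible bases -> 2 bits per base pair
POUND_IN_GRAMS = 453.6

base_pairs = POUND_IN_GRAMS / BP_MOLAR_MASS * AVOGADRO
terabytes = base_pairs * BITS_PER_BP / 8 / 1e12
print(f"~{base_pairs:.1e} base pairs per pound, ~{terabytes:.1e} TB of raw capacity")
# Prints on the order of 1e11 TB, which is why a pound of DNA dwarfs the combined
# storage of conventional computers, ignoring all practical encoding overheads.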
Unlike conventional computers, DNA computers perform many calculations in parallel. Conventional computers operate linearly, taking on tasks one at a time. It is this parallelism that allows DNA to solve complex mathematical problems in hours, whereas it might take electrical computers hundreds of years to complete them.
• Olympus Develops DNA computer-
In early 2002, Olympus Optical Co. Ltd. developed what the company claimed was a commercially practical DNA computer specializing in gene analysis. The computer was developed in conjunction with Akira Toyama, an assistant professor at Tokyo University.
Gene analysis has usually been done manually, by arranging DNA fragments and observing the chemical reactions. But that was time-consuming, said Satoshi Ikuta, a spokesman for Olympus Optical. When DNA computing is applied to gene analysis, what used to take three days can now be done in six hours, he said. DNA computing also allows scientists to observe chemical reactions that occur simultaneously, lowering research costs.
The bottleneck was that, in order to develop a gene analysis DNA computer, engineers were required to have expert knowledge in two specific fields:
I. Information Processing Engineering
II. Molecular Biology
Together, these fields are called genome informatics.
To achieve this, the company formed a joint venture, Novous Gene Inc., which specializes in genome informatics, in February 2001. The principles for a DNA computer that works for gene analysis were provided by Tokyo University's Toyama.
The computer Olympus Optical has developed is divided into two sections, a molecular calculation component and an electronic calculation component. The former calculates DNA combinations of molecules, implements chemical reactions, and searches for and pulls out the right DNA; the latter executes processing programs and analyzes the results.
The company started gene analysis using the DNA computer on a trial basis for a year, and from this year hopes to offer the service on a commercial basis to researchers.
• Example of an application-
Imagine, for example, a DNA computer as a tiny liquid computer – DNA in solution – that could conceivably do such things as monitor the blood in vitro. If a chemical imbalance were detected, the DNA computer might synthesize the needed replacement and release it into the blood to restore equilibrium. It might also eliminate unwanted chemicals by disassembling them at the molecular level, or monitor DNA for anomalies. This type of science is referred to as nanoscience, or nanotechnology, and the DNA computer is essentially a nanocomputer.
• Conclusion-
The DNA computer is only in its early stages of development. Though rudimentary nanocomputers can perform computations, human interaction is still required to separate out the correct answer by ridding the DNA computer solution of all false answers. This is accomplished through a series of chemical steps. However, experts are encouraged by the innate abilities of the DNA computer and see a bright future for it.
Thin Clients
A thin client, sometimes also called a lean or slim client, is a client computer or client software in a client-server architecture network that depends primarily on the central server for processing activities and mainly focuses on conveying input and output between the user and the remote server. In contrast, a thick or fat client does as much processing as possible and passes only data for communication and storage to the server.
Introduction:
The thin client is a PC with less of everything. In designing a computer system, there are decisions to be made about processing, storage, software and user interface. With the reality of reliable high-speed networking, it is possible to change the location of any of these with respect to the others. A gigabit/s network is faster than a PCI bus and many hard drives, so each function can be in a different location. Choices will be made depending on the total cost, cost of operation, reliability, performance and usability of the system. The thin client is closely connected to the user interface.
In a thin client/server system, the only software that is installed on the thin client is the user interface, certain frequently used applications, and a networked operating system. This software can be loaded from a local drive, from the server at boot, or as needed. By simplifying the load on the thin client, it can be a very small, low-powered device, giving lower costs to purchase and to operate per seat. The server, or a cluster of servers, has the full weight of all the applications, services, and data. By keeping a few servers busy and many thin clients lightly loaded, users can expect easier system management and lower costs, as well as all the advantages of networked computing: central storage/backup and easier security.
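A minimal sketch of this split follows, using a toy protocol invented for this example: the server evaluates simple arithmetic expressions sent as text, while the client only forwards keystrokes and displays results. The host, port, and protocol are illustrative assumptions, not any real thin client product.

import socket

HOST, PORT = "127.0.0.1", 5001    # placeholder address for the central server

def run_server() -> None:
    """All application logic and data live here, on the server."""
    with socket.create_server((HOST, PORT)) as server:
        conn, _ = server.accept()
        with conn, conn.makefile("r") as reader:
            for line in reader:
                try:
                    result = str(eval(line, {"__builtins__": {}}))   # the toy "application"
                except Exception as exc:
                    result = f"error: {exc}"
                conn.sendall((result + "\n").encode())

def run_thin_client() -> None:
    """Conveys input and output only; nothing is computed or stored locally."""
    with socket.create_connection((HOST, PORT)) as sock, sock.makefile("r") as reader:
        while True:
            expression = input("> ")             # user input is forwarded verbatim
            sock.sendall((expression + "\n").encode())
            print(reader.readline().strip())     # the server's answer is merely displayed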
History:
What are now called thin clients were originally called "graphical terminals" when they first appeared, because they were a natural development of the text terminal that had gone before them. Text terminals are essentially a thin client for computers that use text for input and output with humans, but are generally not classified as such because they come from an earlier computing era. Today's thin clients must give the user the experience of running the graphical, high-computation programs that are in use today.
It is said that the term "thin client" started to be used instead of "graphical terminal" for the following reasons:
1) When thin clients started to come back into vogue, fat clients had long been the norm in most environments. Many IT workers and managers used to working with fat clients such as PCs and Macs would have been unfamiliar with the term "graphical terminal".
2) As a marketing term, it sounds short and snappy – and also, importantly, it made the technology sound innovative and technologically advanced, even though it was neither – X terminals had been acting as thin clients years before the term was widely used in the IT industry.
3) "Thin Client" also reflects the fact that most of these devices leave out much of the hardware found in typical PCs, such as hard drive, cooling fan and much of the RAM.
Definitions
A thin client is a network computer without a user-writable long-term storage device, which, in client/server applications, is designed to be especially small so that the bulk of the data processing occurs on the server. The embedded OS in a thin client is stored in a "flash drive", in a Disk on Module (DOM), or is downloaded over the network at boot-up. The embedded OS in a thin client usually uses some kind of write filter so that the OS and its configuration can only be changed by administrators.
Industrial thin client applications
Since 2006 there has been a growing interest in using Thin Client technology in hazardous areas, such as oil & gas exploration, military mobile use to monitor gen sets and mobile missile installations, and in industry in Zone 1 areas where hardened industrial computers can be prohibitively expensive. Thin Client hardware is easier to seal against environmental hazards and contamination, and can sometimes withstand a wider temperature and vibration level, due to simplified components and lack of moving parts, such as hard drives and cooling fans.
Other reported benefits include a lower risk of security breaches, lesser weight and greater mobility, and a lower incidence of OS failures. Some thin client solutions (such as ACP's ThinManager Ready Thin Clients) are tightly coupled with specialized management software that enhances the basic features offered by server operating systems.
Thin client products enable easy-to-employ, industry-standard network creation and control in hazardous area zones for less cost and with less risk of failure than full computer systems. In fact, in the first quarter of 2007, mandates were created by the US Armed Forces to look at thin client solutions in all field applications. The military is primarily interested in thin client technology in the field due to its improved cost control, more robust construction, lower vulnerability to failure, and better security.
Advantages of thin clients
1) Lower IT administration costs. Thin clients are managed almost entirely at the server. The hardware has fewer points of failure and the client is simpler (and often lacks permanent storage), providing protection from malware.
2) Easier to secure. Thin clients can be designed so that no application data ever resides on the client (just whatever is displayed), centralizing malware protection and reducing the risks of physical data theft.
3) Enhanced data security. If a thin-client device suffers a serious mishap or industrial accident, no data will be lost, as it resides on the terminal server and not on the point-of-operation device.
4) Lower hardware costs. Thin client hardware is generally cheaper because it does not contain a disk, application memory, or a powerful processor. They also generally have a longer period before requiring an upgrade or becoming obsolete.
5) Less energy consumption. Dedicated thin client hardware has much lower energy consumption than typical thick client PCs. This not only reduces energy costs but may mean that in some cases air-conditioning systems are not required or need not be upgraded which can be a significant cost saving and contribute to achieving energy saving targets. However, more powerful servers and communications are required.
6) Easier hardware failure management. If a thin client fails, a replacement can simply be swapped in while the client is repaired; the user is not inconvenienced because their data is not on the client.
7) Operable in hostile environments. Most thin clients have no moving parts, so they can be used in dusty environments without the worry of PC fans clogging up, overheating, and burning out the PC.
8) Lower noise. The aforementioned removal of fans reduces the noise produced by the unit. This can create a more pleasant and productive working environment.
9) Less wasted hardware. Computer hardware contains heavy metals and plastics and requires energy and resources to create. Thin clients can remain in service longer and ultimately produce less surplus computer hardware than an equivalent thick client installation because they can be made with no moving parts.
10) More efficient use of computing resources. A typical thick client is specified to cope with the maximum load its user needs, which can be inefficient when it is not fully used. In contrast, thin clients use only the exact amount of computing resources required by the current task – in a large network, there is a high probability that the load from each user will fluctuate in a different cycle from that of other users.
Device for running a thin client application program
"Thin client" has also been used as a marketing term for computer appliances designed to run thin client software. The SMARTSTATION THIN CLIENT, NEC US110, IGEL Technology Universal Desktops, Wyse Winterms, Neoware's acquired by Hewlett-Packard HP Compaq t-series, Chip PC Jack PC and Xtreme PC Series, SaaS style Nexterm NEXterminal, Sabertooth TC , TC3ProjectACP's ThinManager Ready Thin Clients, X terminal, ClearCube, Koolu, LISCON TCs, ThinCan or web kiosk might be considered thin clients in this sense.
INTEL ATOM PROCESSOR-
The Intel Atom processor packs 47 million transistors onto a single chip measuring less than 26 mm². It is based on an entirely new hafnium-based 45nm microarchitecture. Representing Intel's smallest and lowest-power processor, the Intel Atom processor enables a new generation of powerful and energy-efficient Mobile Internet Devices (MIDs) and a new category of simple devices for the internet, called netbooks and nettops, that will be available at affordable prices. The Intel Atom processor provides:
• Performance for a great internet experience within a sub-1-watt to 4-watt thermal power envelope, based on industry-leading benchmarks (EEMBC) and web page rendering performance
• Greater energy efficiency for mobile devices enabled by incredibly low average power and idle power, scaling performance from 800MHz to 1.86GHz
• Power-optimized front side bus of up to 533MHz for faster data transfer on demanding mobile applications
• Scalable performance and increased power efficiency with multi-threading support
• Improved performance on multimedia and gaming applications with support for Streaming SIMD Extensions 3 (SSE3)
• Improved power management with new Deep Power Down (C6) enabled on the Intel Atom processor Z5xx series for MIDs, and extended C4 states enabled on Intel Atom processor N270 for netbooks, in addition to non-grid clock distribution, clock gating, CMOS bus mode, and other power saving architectural features
• Low TDP enabled by improved power management technologies delivering high performance to run the real Internet and a broad range of software applications
Key Features
It’s not a laptop; it’s a netbook.
Based on a groundbreaking low-power microarchitecture, the Intel Atom processor powers small, sleek Internet devices designed to go where you go. Whether you want to stay online, keep in touch with friends, or follow favorite Web sites, netbooks with the Intel Atom processor deliver convenience and flexibility in an incredibly small, amazingly smart package.
Just the Internet.
The Intel Atom processor is engineered to deliver the performance needed to keep surfing, blogging, listening to music, watching video, and communicating with the world, while its low-power design enables extended battery life so you can stay online and on the go longer.
All in the palm of your hand
It’s easy to bring the Internet to more places with the Intel Atom processor. Groundbreaking silicon design and a new microarchitecture enable blazing-fast performance in small wireless handheld devices, making it easy to enjoy an amazing Internet experience that fits in your pocket.
Breathtaking graphics
With the Intel Atom processor, streaming video and enjoying favorite online entertainment is very easy, so there’s no need to settle for anything less than a full Internet experience, even in an incredibly small package.
Long battery life keeps you entertained and productive
Power-efficient design enables extended battery life to keep on surfing, blogging, listening to music, watching video and communicating with the world.
Using the world's smallest transistors, the Intel Atom processor takes the next big leap in ultra-small and powerful computing with performance optimized technologies that are also more environmentally responsible. Using lead-free and halogen-free manufacturing, the Intel Atom processor is the first in Intel's lineup to eliminate halogen and lead products altogether.
3G
Introduction:
International Mobile Telecommunications-2000 (IMT-2000), better known as 3G or 3rd Generation, is a family of standards for mobile telecommunications defined by the International Telecommunication Union, which includes GSM EDGE, UMTS, and CDMA2000 as well as DECT and WiMAX. Services include wide-area wireless voice telephone, video calls, and wireless data, all in a mobile environment. Compared to 2G and 2.5G services, 3G allows simultaneous use of speech and data services and higher data rates (up to 14.0 Mbit/s on the downlink and 5.8 Mbit/s on the uplink with HSPA+). Thus, 3G networks enable network operators to offer users a wider range of more advanced services while achieving greater network capacity through improved spectral efficiency.
The International Telecommunication Union (ITU) defined the third generation (3G) of mobile telephony standards – IMT-2000 – to facilitate growth, increase bandwidth, and support more diverse applications. For example, GSM (currently the most popular cellular phone standard) could deliver not only voice, but also circuit-switched data at download rates up to 14.4 kbps. But to support mobile multimedia applications, 3G had to deliver packet-switched data with better spectral efficiency, at far greater bandwidths.
History
The first pre-commercial 3G network was launched by NTT DoCoMo in Japan branded FOMA, in May 2001 on a pre-release of W-CDMA technology.[7] The first commercial launch of 3G was also by NTT DoCoMo in Japan on October 1, 2001, although it was initially somewhat limited in scope;[8][9] broader availability was delayed by apparent concerns over reliability.[10] The second network to go commercially live was by SK Telecom in South Korea on the 1xEV-DO technology in January 2002. By May 2002 the second South Korean 3G network was by KTF on EV-DO and thus the Koreans were the first to see competition among 3G operators.
The first European pre-commercial network was at the Isle of Man by Manx Telecom, the operator then owned by British Telecom, and the first commercial network in Europe was opened for business by Telenor in December 2001 with no commercial handsets and thus no paying customers. These were both on the W-CDMA technology.
The first commercial United States 3G network was by Monet Mobile Networks, on CDMA2000 1x EV-DO technology, but this network provider later shut down operations. The second 3G network operator in the USA was Verizon Wireless in October 2003 also on CDMA2000 1x EV-DO, and this network has grown strongly since then.
The first pre-commercial demonstration network in the southern hemisphere was built in Adelaide, South Australia by m.Net Corporation in February 2002 using UMTS on 2100 MHz. This was a demonstration network for the 2002 IT World Congress. The first commercial 3G network was launched by Hutchison Telecommunications branded as Three in March 2003.
In December 2007, 190 3G networks were operating in 40 countries and 154 HSDPA networks were operating in 71 countries, according to the Global Mobile Suppliers Association (GSA). In Asia, Europe, Canada and the USA, telecommunication companies use W-CDMA technology with the support of around 100 terminal designs to operate 3G mobile networks.
In Europe, mass market commercial 3G services were introduced starting in March 2003 by 3 (Part of Hutchison Whampoa) in the UK and Italy. The European Union Council suggested that the 3G operators should cover 80% of the European national populations by the end of 2005.
Roll-out of 3G networks was delayed in some countries by the enormous costs of additional spectrum licensing fees. In many countries, 3G networks do not use the same radio frequencies as 2G, so mobile operators must build entirely new networks and license entirely new frequencies; an exception is the United States where carriers operate 3G service in the same frequencies as other services. The license fees in some European countries were particularly high, bolstered by government auctions of a limited number of licenses and sealed bid auctions, and initial excitement over 3G's potential. Other delays were due to the expenses of upgrading equipment for the new systems.
By June 2007 the 200 millionth 3G subscriber had been connected. Out of 3 billion mobile phone subscriptions worldwide, this is only 6.7%. In the countries where 3G was launched first - Japan and South Korea - 3G penetration is over 70%.[11] In Europe the leading country is Italy, with a third of its subscribers migrated to 3G. Other leading countries by 3G migration include the UK, Austria, Australia and Singapore at around the 20% migration level. A potentially confusing statistic counts CDMA2000 1xRTT customers as 3G customers; under that definition, the total 3G subscriber base would have been 475 million as of June 2007, or 15.8% of all subscribers worldwide.
Still, several developing countries have not awarded 3G licenses and customers await 3G services. China delayed its decisions on 3G for many years, mainly because of the government's delay in establishing well-defined standards.[12] China announced in May 2008 that the telecoms sector was being re-organized and three 3G networks would be allocated so that the largest mobile operator, China Mobile, would retain its GSM customer base. China Unicom would retain its GSM customer base but relinquish its CDMA2000 customer base, and launch 3G on the globally leading WCDMA (UMTS) standard. The CDMA2000 customers of China Unicom would go to China Telecom, which would then launch 3G on the CDMA 1x EV-DO standard. This meant that China would have all three main cellular 3G technology standards in commercial use. Finally, in January 2009, China's Ministry of Industry and Information Technology awarded licenses for all three standards: TD-SCDMA to China Mobile, WCDMA to China Unicom, and CDMA2000 to China Telecom.
In November 2008, Turkey auctioned four IMT-2000/UMTS 3G licenses of 45, 40, 35 and 25 MHz. Turkcell won the 45 MHz band with its €358 million offer, followed by Vodafone and Avea leasing the 40 MHz and 35 MHz frequencies, respectively, for 20 years. The 25 MHz license remains to be auctioned.
The first African use of 3G technology was a 3G videocall made in Johannesburg on the Vodacom network in November 2004. The first commercial launch of 3G in Africa was by EMTEL in Mauritius on the W-CDMA standard. In Morocco, in late March 2006, a 3G service was provided by the new company Wana.
Telus first introduced 3G services in Canada in 2005. Rogers Wireless began implementing 3G HSDPA services in eastern Canada early 2007 in the form of Rogers Vision. Fido Solutions and Rogers Wireless now offer 3G service in most urban centres.
T-Mobile, a major telecommunications services provider, has recently released a list of over 120 U.S. cities that will be provided with 3G network coverage during 2009.[13]
In 2008, India entered the 3G mobile arena with the launch of 3G-enabled mobile services by Mahanagar Telephone Nigam Limited (MTNL), the first mobile operator in India to launch 3G services.
Features
Data rates
The ITU has not provided a clear definition of the data rate users can expect from 3G equipment or providers. Thus users sold 3G service may not be able to point to a standard and say that the rates it specifies are not being met. While stating in commentary that "it is expected that IMT-2000 will provide higher transmission rates: a minimum data rate of 2 Mbit/s for stationary or walking users, and 384 kbit/s in a moving vehicle,"[14] the ITU does not actually specify minimum or average rates, or which modes of the interfaces qualify as 3G, so a wide variety of rates are sold as 3G, intended to meet customers' expectations of broadband data.
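To make these rates concrete, the rough sketch below compares how long a 5 MB file (an assumed size) would take to download at the nominal peak rates mentioned in this document: GSM circuit-switched data (14.4 kbit/s), GPRS (114 kbit/s), the IMT-2000 figures (384 kbit/s moving, 2 Mbit/s stationary), and the HSPA+ downlink (14 Mbit/s). Real-world throughput is lower than these nominal peaks, and protocol overhead is ignored.

# Rough download-time comparison at nominal peak rates (overhead ignored).
FILE_MB = 5                          # assumed file size
file_bits = FILE_MB * 8 * 1_000_000  # using 1 MB = 10^6 bytes for simplicity

rates_kbps = {
    "GSM CSD (14.4 kbit/s)": 14.4,
    "GPRS (114 kbit/s)": 114,
    "IMT-2000 moving (384 kbit/s)": 384,
    "IMT-2000 stationary (2 Mbit/s)": 2_000,
    "HSPA+ downlink (14 Mbit/s)": 14_000,
}

for name, kbps in rates_kbps.items():
    seconds = file_bits / (kbps * 1_000)
    print(f"{name:32s} ~{seconds / 60:6.1f} minutes")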
Security
3G networks offer a greater degree of security than their 2G predecessors. By allowing the UE (user equipment) to authenticate the network it is attaching to, the user can be sure the network is the intended one and not an impersonator. 3G networks use the KASUMI block cipher instead of the older A5/1 stream cipher. However, a number of serious weaknesses in the KASUMI cipher have been identified[citation needed].
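For illustration only, the sketch below shows the general pattern behind 3G confidentiality: a block cipher drives a keystream that is XORed with the traffic. KASUMI is not available in common Python libraries, so AES in counter mode (from the third-party cryptography package) stands in for it here; this is not the actual UMTS f8/UEA1 algorithm, just the keystream-XOR idea.

# Illustrative only: block-cipher-driven keystream encryption, in the spirit of
# 3G's f8 confidentiality function. AES-CTR is used as a stand-in because
# KASUMI is not available in common libraries; this is NOT the real f8/UEA1.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)     # session cipher key (3G uses a 128-bit CK)
nonce = os.urandom(16)   # stand-in for the f8 COUNT/BEARER/DIRECTION input block

def keystream_xor(data: bytes) -> bytes:
    """XOR the data with a keystream generated by a block cipher in counter mode."""
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return encryptor.update(data) + encryptor.finalize()

frame = b"voice/data frame payload"
protected = keystream_xor(frame)
recovered = keystream_xor(protected)   # XORing the same keystream again restores the data
assert recovered == frame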
In addition to the 3G network infrastructure security, end to end security is offered when application frameworks such as IMS are accessed, although this is not strictly a 3G property.
Evolution from 2G
2G networks were built mainly for voice services and slow data transmission.
From 2G to 2.5G
The first major step in the evolution to 3G occurred with the introduction of General Packet Radio Service (GPRS). Cellular services combined with GPRS became known as '2.5G'.
GPRS could provide data rates from 56 kbit/s up to 114 kbit/s. It can be used for services such as Wireless Application Protocol (WAP) access, Multimedia Messaging Service (MMS), and for Internet communication services such as email and World Wide Web access. GPRS data transfer is typically charged per megabyte of traffic transferred, while data communication via traditional circuit switching is billed per minute of connection time, independent of whether the user actually is utilizing the capacity or is in an idle state.
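As a simple illustration of the billing difference described above, the sketch below compares a per-megabyte GPRS charge with a per-minute circuit-switched charge for the same session; the tariff figures are made-up assumptions.

# Illustrative billing comparison: per-MB (packet-switched GPRS) vs per-minute
# (circuit-switched) charging. Tariffs are made-up assumptions.

GPRS_PRICE_PER_MB = 0.10   # assumed currency units per megabyte transferred
CSD_PRICE_PER_MIN = 0.05   # assumed currency units per minute connected

def gprs_cost(mb_transferred: float) -> float:
    # Packet-switched billing charges only for the data actually moved.
    return mb_transferred * GPRS_PRICE_PER_MB

def csd_cost(minutes_connected: float) -> float:
    # Circuit-switched billing runs whether or not data is actually flowing.
    return minutes_connected * CSD_PRICE_PER_MIN

# A 30-minute session in which only 2 MB is actually transferred:
print("GPRS:", gprs_cost(2.0))    # pays only for the 2 MB moved
print("CSD: ", csd_cost(30.0))    # pays for the whole 30 minutes, idle or not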
From 2.5G to 2.75G (EDGE)
GPRS networks evolved to EDGE networks with the introduction of 8PSK encoding. Enhanced Data rates for GSM Evolution (EDGE), Enhanced GPRS (EGPRS), or IMT Single Carrier (IMT-SC) is a backward-compatible digital mobile phone technology that allows improved data transmission rates, as an extension on top of standard GSM. EDGE can be considered a 3G radio technology and is part of ITU's 3G definition, but is most frequently referred to as 2.75G. EDGE was deployed on GSM networks beginning in 2003—initially by Cingular (now AT&T) in the United States.
EDGE is standardized by 3GPP as part of the GSM family, and it is an upgrade that provides a potential three-fold increase in capacity of GSM/GPRS networks. The specification achieves higher data-rates by switching to more sophisticated methods of coding (8PSK), within existing GSM timeslots.
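The "potential three-fold increase" follows directly from the modulation change: GMSK carries one bit per symbol, while 8PSK carries log2(8) = 3 bits per symbol within the same GSM timeslot, as the small check below shows.

# Why 8PSK offers roughly a three-fold raw-rate increase over GMSK in the same timeslot.
from math import log2

bits_per_symbol_gmsk = 1              # GMSK: one bit per symbol
bits_per_symbol_8psk = int(log2(8))   # 8PSK: 8 constellation points -> 3 bits per symbol

print(bits_per_symbol_8psk / bits_per_symbol_gmsk)   # 3.0 -> ~3x raw data rate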
Evolution towards 4G
Both 3GPP and 3GPP2 are currently working on further extensions to 3G standards, named Long Term Evolution and Ultra Mobile Broadband, respectively. Being based on an all-IP network infrastructure and using advanced wireless technologies such as MIMO, these specifications already display features characteristic of IMT-Advanced (4G), the successor to 3G. However, falling short of the bandwidth requirements for 4G (1 Gbit/s for stationary and 100 Mbit/s for mobile operation), these standards are classified as 3.9G or Pre-4G.
3GPP plans to meet the 4G goals with LTE Advanced, whereas Qualcomm has halted development of UMB in favour of the LTE family.[5]
Issues
Although 3G was successfully introduced to users across the world, some issues are debated by 3G providers and users:
• Expensive input fees for the 3G service licenses in some jurisdictions
• Differences in licensing terms between states
• Level of debt incurred by some telecommunication companies, which makes investment in 3G difficult
• Lack of state support for financially troubled operators
• Cost of 3G phones
• Lack of coverage in some areas
• Demand for broadband services in a hand-held device
• Battery life of 3G phones
References:
• http://wikipedia.org
• http://www.google.co.in
• http://www.wikinvest.com
• http://www.mobilein.com