LONDON: Soon, you won't have to go through the painstaking and irritating task of searching for your missing car keys, thanks to "Smart Goggle", which can track down any misplaced item.

A team of Japanese scientists, led by Yasuo Kuniyoshi at the Tokyo University School of Information Science and Technology, has come up with a secretive artificial intelligence project codenamed Smart Goggle, which they claim can help search for anything from a remote control to a mobile phone or iPod. According to the scientists, one just needs to tell the glasses what he or she is searching for. Following the voice command, the Smart Goggle plays into the wearer's eye a video of the last few seconds they saw that missing item. A small camera rests on the glasses, making a constant record of everything the wearer sees. The tiny display inside the glasses spots what is being checked, and a small readout immediately announces what the computer thinks the object most likely is.

Professor Kuniyoshi said that the extraordinary property of the glasses doesn't lie in its hardware, but in the computer algorithm that allows the goggles to know instantly what they are seeing. He said that if the wearer roams around the house for about an hour telling the goggles the name of everything from a coathanger to the kitchen sink, they would retain the information. And if at some point in the future the wearer asks them where they last saw a particular item, they will play the suitable footage, reports Times Online.

Kuniyoshi describes his goggles as the ultimate link between the real world and the cyber world and maintains that his invention could eventually be loaded with vast quantities of data from the internet. With a huge database installed, the glasses might actually know much more about what the wearer is seeing than the wearer himself: species of animal, technical specifications of vehicles and electronics, or even the identity of people. In a demonstration, the professor showed how the user might, for example, gaze at a selection of unknown flowers and the glasses would say which were begonias, which were ferns and which were pansies.

Although the experimental model is still too bulky for daily use, the team at the Tokyo University School of Information Science and Technology is confident that it can soon be miniaturised. It could even, they suggest, be small enough to look little different from a normal pair of glasses. But unfortunately, of course, there is one irritating question they would not be able to answer: "Now where did I put my glasses?"
The basic protocol for sending data over the Internet and many other computer networks is the Internet Protocol ("IP"). The header of each IP packet contains, among other things, the numerical source and destination address of the packet. The source address is normally the address that the packet was sent from. By forging the header so it contains a different address, an attacker can make it appear that the packet was sent by a different machine. The machine that receives spoofed packets will send responses back to the forged source address, which means that this technique is mainly used when the attacker does not care about the response, or has some way of guessing it.
In certain cases, it might be possible for the attacker to see or redirect the response to his own machine. The most usual case is when the attacker is spoofing an address on the same LAN or WAN.
IP spoofing is most frequently used in denial-of-service attacks. In such attacks, the goal is to flood the victim with overwhelming amounts of traffic, and the attacker does not care about receiving responses to his attack packets. Packets with spoofed addresses are thus suitable for such attacks. They have additional advantages for this purpose - they are more difficult to filter since each spoofed packet appears to come from a different address, and they hide the true source of the attack. Denial of service attacks that use spoofing typically randomly choose addresses from the entire IP address space, though more sophisticated spoofing mechanisms might avoid unroutable addresses or unused portions of the IP address space. The proliferation of large botnets makes spoofing less important in denial of service attacks, but attackers typically have spoofing available as a tool, if they want to use it, so defenses against denial-of-service attacks that rely on the validity of the source IP address in attack packets might have trouble with spoofed packets. Backscatter, a technique used to observe denial-of-service attack activity in the Internet, relies on attackers' use of IP spoofing for its effectiveness.
IP spoofing can also be a method of attack used by network intruders to defeat network security measures, such as authentication based on IP addresses. This method of attack on a remote system can be extremely difficult, as it involves modifying thousands of packets at a time. This type of attack is most effective where trust relationships exist between machines. For example, it is common on some corporate networks to have internal systems trust each other, so that a user can log in without a username or password provided he is connecting from another machine on the internal network (and so must already be logged in). By spoofing a connection from a trusted machine, an attacker may be able to access the target machine without authenticating.
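As a concrete illustration of the forged-header idea, here is a minimal sketch using the Scapy library (an assumed dependency; the post itself does not mention any tool). The addresses are placeholders from documentation ranges, sending raw packets requires root privileges, and this should only ever be tried on a lab network you own.

```python
# Minimal sketch of a packet with a forged source address, assuming Scapy is
# installed (pip install scapy) and the script is run as root on a test network.
from scapy.all import IP, UDP, send

forged_source = "198.51.100.25"   # an address we do not own (documentation range)
victim        = "192.0.2.10"      # destination (documentation range)

# The IP header's source field is simply set to the forged address;
# any reply will go to 198.51.100.25, not to the real sender.
packet = IP(src=forged_source, dst=victim) / UDP(dport=53)
send(packet, verbose=False)
```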
Configurations and services that are vulnerable to IP spoofing:
RPC (Remote Procedure Call services)
Any service that uses IP address authentication
The X Window system
The R services suite (rlogin, rsh, etc.)
Defense against spoofing
Packet filtering is one defense against IP spoofing attacks. The gateway to a network usually performs ingress filtering, which is the blocking of packets from outside the network with a source address inside the network. This prevents an outside attacker from spoofing the address of an internal machine. Ideally, the gateway would also perform egress filtering on outgoing packets, which is the blocking of packets from inside the network with a source address that is not inside. This prevents an attacker within the network that is performing the filtering from launching IP spoofing attacks against external machines.
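The filtering rule itself is easy to state. The following is a toy sketch of the decision a border gateway makes, written with Python's ipaddress module purely for illustration; a real deployment would express this in the router's or firewall's own ACL syntax, and the internal prefix used here is hypothetical.

```python
# Toy illustration of ingress/egress filtering decisions at a network border.
# The 203.0.113.0/24 prefix is a hypothetical "inside" network.
import ipaddress

INTERNAL = ipaddress.ip_network("203.0.113.0/24")

def allow(source_ip: str, direction: str) -> bool:
    src_inside = ipaddress.ip_address(source_ip) in INTERNAL
    if direction == "ingress":   # packet arriving from outside
        return not src_inside    # outside packets must not claim an inside source
    if direction == "egress":    # packet leaving the network
        return src_inside        # inside packets must carry an inside source
    raise ValueError("direction must be 'ingress' or 'egress'")

print(allow("203.0.113.9", "ingress"))   # False: spoofed internal source, drop it
print(allow("198.51.100.7", "egress"))   # False: internal host spoofing an outside source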
It is also recommended to design network protocols and services so that they do not rely on the source IP address for authentication.
Upper layers
Some upper layer protocols provide their own defense against IP spoofing. For example, Transmission Control Protocol (TCP) uses sequence numbers negotiated with the remote machine to ensure that arriving packets are part of an established connection. Since the attacker normally can't see any reply packets, he has to guess the sequence number in order to hijack the connection. Poor implementations in many older operating systems and network devices, however, mean that TCP sequence numbers can be predicted.
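The predictability point can be seen with a toy comparison. This is not any real TCP stack, just an illustration of why a fixed-increment initial sequence number (ISN) is guessable after observing a couple of connections, while a randomly drawn one is not.

```python
# Toy comparison of predictable vs. random initial sequence numbers (ISNs).
# Not a real TCP implementation; only shows why predictability matters.
import secrets

class OldStyleStack:
    """ISN increases by a fixed step, as in some older TCP stacks."""
    def __init__(self):
        self._isn = 1000
    def next_isn(self):
        self._isn = (self._isn + 64000) % 2**32
        return self._isn

class RandomStack:
    """ISN is drawn from a cryptographically strong random source."""
    def next_isn(self):
        return secrets.randbits(32)

old = OldStyleStack()
seen = [old.next_isn(), old.next_isn()]            # attacker observes two ISNs
guess = (seen[-1] + (seen[-1] - seen[-2])) % 2**32
print(guess == old.next_isn())                     # True: the next ISN was guessable

rnd = RandomStack()
print(rnd.next_isn(), rnd.next_isn())              # unrelated values, no useful pattern
```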
IP blocking prevents the connection between a computer or network and certain IP addresses or ranges of addresses. IP blocking effectively bans undesired connections from those computers to a website, mail server, or other Internet server.
IP banning is commonly used on computer servers to protect against brute force attacks. Both companies and schools offering remote user access, and people wanting to access their home computers from remote locations, use Linux programs such as BlockHosts, DenyHosts or Fail2ban for protection from unauthorized access while allowing permitted remote access.
It is also used for censorship. One example is the July 2003 decision by techfocus.org to ban the Recording Industry Association of America (RIAA) and Motion Picture Association of America (MPAA) from its website for various abuses by those two organisations of the content on it.
On an Internet forum or Web site, an IP ban is often used as a last resort to prevent a disruptive member from accessing the site, though a warning and/or account ban may be used first. Dynamic allocation of IP addresses can complicate incoming IP blocking, making it difficult to block a specific user without blocking a larger number of IP addresses and thereby risking collateral damage, since ISPs share IP addresses among multiple internet users.
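At its core an IP ban is just a lookup against a list of addresses or ranges. The sketch below is a hypothetical illustration of that check, not how Fail2ban, DenyHosts or any particular forum engine is actually implemented (those tools typically manipulate firewall rules instead).

```python
# Minimal sketch of an IP ban list supporting single addresses and CIDR ranges.
# Hypothetical example; real tools such as Fail2ban adjust firewall rules instead.
import ipaddress

BANNED = [
    ipaddress.ip_network("203.0.113.0/24"),    # an entire range
    ipaddress.ip_network("198.51.100.42/32"),  # a single disruptive host
]

def is_banned(client_ip: str) -> bool:
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BANNED)

print(is_banned("203.0.113.200"))   # True, falls inside the banned range
print(is_banned("192.0.2.1"))       # False
```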
IP Blocking of the Showtime website for non-US origins
IP banning is also used to limit the syndication of content to a specific region. To achieve this IP-addresses are mapped to the countries they have been assigned to.
Proxy servers can be used to bypass an IP ban unless the site being accessed has an effective anti-proxy script.
IP Addresses
In order for systems to locate each other in a distributed environment, nodes are given explicit addresses that uniquely identify the particular network the system is on and uniquely identify the system to that particular network. When these two identifiers are combined, the result is a globally-unique address.
This address, known as an "IP address", an "IP number", or merely an "IP", is a code made up of four numbers separated by dots that identifies a particular computer on the Internet. These addresses are actually 32-bit binary numbers, consisting of the two subaddresses (identifiers) mentioned above which, respectively, identify the network and the host to the network, with an imaginary boundary separating the two. An IP address is, as such, generally shown as 4 octets of numbers from 0-255 represented in decimal form instead of binary form.
For example, the address 168.212.226.204 represents the 32-bit binary number 10101000.11010100.11100010.11001100.
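That conversion can be checked in a couple of lines of Python, shown here only as a worked check of the arithmetic above.

```python
# Convert a dotted-decimal IPv4 address to its dotted-binary form.
address = "168.212.226.204"
print(".".join(f"{int(octet):08b}" for octet in address.split(".")))
# -> 10101000.11010100.11100010.11001100
```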
The binary number is important because that will determine which class of network the IP address belongs to. The Class of the address determines which part belongs to the network address and which part belongs to the node address (see IP address Classes further on).
The location of the boundary between the network and host portions of an IP address is determined through the use of a subnet mask. This is another 32-bit binary number which acts like a filter when it is applied to the 32-bit IP address. By comparing a subnet mask with an IP address, systems can determine which portion of the IP address relates to the network and which portion relates to the host. Anywhere the subnet mask has a bit set to “1”, the underlying bit in the IP address is part of the network address. Anywhere the subnet mask is set to “0”, the related bit in the IP address is part of the host address.
The size of a network is a function of the number of bits used to identify the host portion of the address. If a subnet mask shows that 8 bits are used for the host portion of the address block, a maximum of 256 host addresses are available for that specific network. If a subnet mask shows that 16 bits are used for the host portion of the address block, a maximum of 65,536 possible host addresses are available for use on that network.
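Python's ipaddress module can illustrate both points, the network/host split and the resulting address counts. The prefixes below are arbitrary example values, not anything specific to the text.

```python
# Illustrate the subnet-mask split and the resulting address counts.
import ipaddress

net8 = ipaddress.ip_network("10.1.0.0/24")    # 24-bit network mask, 8 host bits
net16 = ipaddress.ip_network("10.0.0.0/16")   # 16-bit network mask, 16 host bits

print(net8.netmask, net8.num_addresses)       # 255.255.255.0 256
print(net16.netmask, net16.num_addresses)     # 255.255.0.0 65536

# The mask picks out the network portion of any address in the block:
addr = ipaddress.ip_address("10.1.0.77")
print(ipaddress.ip_network((int(addr) & int(net8.netmask), 24)))   # 10.1.0.0/24
```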
An Internet Service Provider (ISP) will generally assign either a static IP address (always the same) or a dynamic address (changes every time one logs on).
ISPs and organizations usually apply to the InterNIC for a range of IP addresses so that all clients have similar addresses.
There are about 4.3 billion IP addresses. The class-based, legacy addressing scheme places heavy restrictions on the distribution of these addresses.
TCP/IP networks are inherently router-based, and it takes much less overhead to keep track of a few networks than millions of them.
IP Classes
Class A addresses always have the first bit of their IP addresses set to “0”. Since Class A networks have an 8-bit network mask, the use of a leading zero leaves only 7 bits for the network portion of the address, allowing for a maximum of 128 possible network numbers, ranging from 0.0.0.0 – 127.0.0.0. Number 127.x.x.x is reserved for loopback, used for internal testing on the local machine.
Class B addresses always have their first bit set to "1" and their second bit set to "0". Since Class B addresses have a 16-bit network mask, the use of a leading "10" bit-pattern leaves 14 bits for the network portion of the address, allowing for a maximum of 16,384 networks, ranging from 128.0.0.0 – 191.255.0.0.
Class C addresses have their first two bits set to “1” and their third bit set to “0”. Since Class C addresses have a 24-bit network mask, this leaves 21 bits for the network portion of the address, allowing for a maximum of 2,097,152 network addresses, ranging from 192.0.0.0 – 223.255.255.0.
Class D addresses are used for multicasting applications. Class D addresses have their first three bits set to “1” and their fourth bit set to “0”. Class D addresses are 32-bit network addresses, meaning that all the values within the range of 224.0.0.0 – 239.255.255.255 are used to uniquely identify multicast groups. There are no host addresses within the Class D address space, since all the hosts within a group share the group’s IP address for receiver purposes.
Class E addresses are defined as experimental and are reserved for future testing purposes. They have never been documented or utilized in a standard way.
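The class of a legacy address follows directly from its leading bits, or equivalently from its first octet, so a small helper can classify any dotted-decimal address. This is only a sketch of the historical scheme described above.

```python
# Classify an IPv4 address under the legacy class-based scheme.
def ip_class(address: str) -> str:
    first_octet = int(address.split(".")[0])
    if first_octet < 128:
        return "A"                  # leading bit 0
    if first_octet < 192:
        return "B"                  # leading bits 10
    if first_octet < 224:
        return "C"                  # leading bits 110
    if first_octet < 240:
        return "D (multicast)"      # leading bits 1110
    return "E (experimental)"       # leading bits 1111

for example in ("10.0.0.1", "172.16.5.4", "192.168.1.1", "224.0.0.5", "250.1.2.3"):
    print(example, "->", ip_class(example))
```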
IP VERSIONS
The Internet Protocol (IP) has two versions currently in use (see IP version history for details). Each version has its own definition of an IP address. Because of its prevalence, "IP address" typically refers to those defined by IPv4.
IPv4 uses 32-bit (4-byte) addresses, which limits the address space to 4,294,967,296 (2^32) possible unique addresses. However, IPv4 reserves some addresses for special purposes such as private networks (~18 million addresses) or multicast addresses (~270 million addresses). This reduces the number of addresses that can be allocated as public Internet addresses, and as the number of addresses available is consumed, an IPv4 address shortage appears to be inevitable in the long run. This limitation has helped stimulate the push towards IPv6, which is currently in the early stages of deployment and is currently the only contender to replace IPv4.
IPv4 addresses are usually represented in dotted-decimal notation (four numbers, each ranging from 0 to 255, separated by dots, e.g. 147.132.42.18). Each part represents 8 bits of the address, and is therefore called an octet. It is possible, although less common, to write IPv4 addresses in binary or hexadecimal. When converting, each octet is treated as a separate number. (So 255.255.0.0 in dot-decimal would be FF.FF.00.00 in hexadecimal.)
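The octet-by-octet conversion mentioned above is easy to reproduce, again just as a worked check of the example in the text.

```python
# Show an IPv4 address octet by octet in hexadecimal and binary.
address = "255.255.0.0"
octets = [int(o) for o in address.split(".")]
print(".".join(f"{o:02X}" for o in octets))   # FF.FF.00.00
print(".".join(f"{o:08b}" for o in octets))   # 11111111.11111111.00000000.00000000
```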
IPv4 address networks
In the early stages of development of the Internet Protocol, network administrators interpreted IP addresses as structures of network numbers and host numbers, with the highest-order octet (first eight bits) of an IP address designating the "network number" and the rest of the bits (called the "rest" field) used for host numbering within a network. This method soon proved inadequate as local area networks developed that were not part of the larger networks already designated by a network number. In 1981, the IP protocol specification was revised with the introduction of the classful network architecture.
Classful network design allowed for a larger number of individual allocations. The first three bits of the most significant octet of an IP address came to imply the "class" of the address instead of just the network number and, depending on the class derived, the network designation was based on octet boundary segments of the entire address. The class ranges listed in the "IP Classes" section above give an overview of this system.
When someone manually configures a computer to use the same IP address each time it powers up, this is known as a Static IP address. In contrast, in situations when the computer's IP address is assigned automatically, it is known as a Dynamic IP address.
Method of assignment
Static IP addresses get manually assigned to a computer by an administrator. The exact procedure varies according to platform. This contrasts with dynamic IP addresses, which are assigned either randomly (by the computer itself, as in Zeroconf), or assigned by a server using Dynamic Host Configuration Protocol (DHCP). Even though IP addresses assigned using DHCP may stay the same for long periods of time, they can generally change. In some cases, a network administrator may implement dynamically assigned static IP addresses. In this case, a DHCP server is used, but it is specifically configured to always assign the same IP address to a particular computer, and never to assign that IP address to another computer. This allows static IP addresses to be configured in one place, without having to specifically configure each computer on the network in a different way.
In the absence of both an administrator (to assign a static IP address) and a DHCP server, the operating system may assign itself an IP address using state-less autoconfiguration methods, such as Zeroconf. These IP addresses, known as link-local addresses, default to the 169.254.0.0/16 address range in IPv4.
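Whether an address came from such self-assignment is easy to spot, since the 169.254.0.0/16 block is reserved for it. A one-line check, shown only for illustration:

```python
# Check whether an address falls in the IPv4 link-local (Zeroconf/APIPA) range.
import ipaddress
print(ipaddress.ip_address("169.254.17.5").is_link_local)   # True
print(ipaddress.ip_address("192.168.1.20").is_link_local)   # False
```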
Uses of dynamic addressing
Dynamic IP addresses are most frequently assigned on LANs and broadband networks by Dynamic Host Configuration Protocol (DHCP) servers. They are used because this avoids the administrative burden of assigning specific static addresses to each device on a network. It also allows many devices to share limited address space on a network if only some of them will be online at a particular time. In most current desktop operating systems, dynamic IP configuration is enabled by default so that a user does not need to manually enter any settings to connect to a network with a DHCP server. DHCP is not the only technology used to assign dynamic IP addresses. Dialup and some broadband networks use dynamic address features of the Point-to-Point Protocol.
Uses of static addressing
Some infrastructure situations have to use static addressing, such as when finding the Domain Name Service directory host that will translate domain names to IP addresses. Static addresses are also convenient, but not absolutely necessary, to locate servers inside an enterprise. An address obtained from a DNS server comes with a time to live, or caching time, after which it should be looked up to confirm that it has not changed. Even static IP addresses do change as a result of network administration.
This is only for educational purposes, so whoever tries it does so at his own risk. I am not sure that this will work 100% of the time, but it will work about 70% of the time. Before that, you need to know a few things about the Yahoo chat protocol. Leave a comment here after you see the post and let me know whether it works or whether you have a problem. Following are the features:
1) When we chat on Yahoo, everything goes through the server; that is, the chat messages.
2) When we send files, Yahoo has two options:
a) Either it uploads the file and then the other client has to download it,
b) or it connects to the other client directly and transfers the file.
3) When we use video or audio:
a) It either goes through the server,
b) or it has a client-to-client connection.
And when we have a client-to-client connection, the other party's IP is revealed, on port 5051. So how do we exploit the chat user when we get a direct connection, and how do we go about it? Remember, I am here to hack a system without using a tool, only with simple net commands and Yahoo chat techniques. That's what makes the difference between a real hacker and a newbie.
1) It's impossible to get the other user's IP address when you only chat.
2) There is a 50% chance of getting an IP address when you send files.
3) Again, a 50% chance of getting the IP when you use video or audio. So why wait? Let's exploit those 50% chances.
Steps:
1) Go to DOS and type: netstat -n
You will get the following output. Just don't worry about it and stay cool.

    Active Connections
    Proto  Local Address          Foreign Address         State
    TCP    194.30.209.15:1631     194.30.209.20:5900      ESTABLISHED
    TCP    194.30.209.15:2736     216.136.224.214:5050    ESTABLISHED
    TCP    194.30.209.15:2750     64.4.13.85:1863         ESTABLISHED
    TCP    194.30.209.15:2864     64.4.12.200:1863        ESTABLISHED

I will explain what the output is in general: on the left-hand side is your IP address, and on the right-hand side is the IP address of the foreign machine along with the port it is connected to. OK, so what next?

2) Try sending a file to the target. If the file goes through the server, that is, it is uploaded, leave it; you will not get the IP. But if a direct connection is established, hmmm, then the first phase is over. This is the output in your netstat; port 5101 is where the other user is connected:

    Active Connections
    Proto  Local Address          Foreign Address         State
    TCP    194.30.209.15:1631     194.30.209.20:5900      ESTABLISHED
    TCP    194.30.209.15:2736     216.136.224.214:5050    ESTABLISHED
    TCP    194.30.209.15:2750     64.4.13.85:1863         ESTABLISHED
    TCP    194.30.209.15:2864     64.4.12.200:1863        ESTABLISHED
    TCP    194.30.209.15:5101     194.30.209.14:3290      ESTABLISHED

That last line is the one that was highlighted in red. So what next?
3) OK, so now go to the DOS prompt and just do: nbtstat -A <the other user's IP address>. It can happen, if the system is not protected, that you can see the whole network.

    C:\>nbtstat -A 194.30.209.14

    Local Area Connection:
    Node IpAddress: [194.30.209.15] Scope Id: []

    NetBIOS Remote Machine Name Table
    Name         Type      Status
    ---------------------------------------------
    EDP12        <00>      UNIQUE      Registered
    SHIV         <00>      GROUP       Registered
    SHIV         <20>      UNIQUE      Registered
    SHIVCOMP1    <1e>      GROUP       Registered

    MAC Address = 00-C0-W0-D5-EF-9A

OK, so now you will ask what next. No, you find out what you can do with this network rather than me explaining everything. So the conclusion is: never exchange files, video or audio until you know that the user you are chatting with is not going to harm you.
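If you prefer not to read the netstat output by eye, roughly the same information can be pulled programmatically. The sketch below assumes the third-party psutil package (not part of the original trick) and simply lists established TCP connections with their remote endpoints; on some systems it may need elevated privileges to see every connection.

```python
# Rough programmatic equivalent of "netstat -n": list established TCP connections.
# Assumes the third-party psutil package (pip install psutil).
import psutil

for conn in psutil.net_connections(kind="tcp"):
    if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
        local = f"{conn.laddr.ip}:{conn.laddr.port}"
        remote = f"{conn.raddr.ip}:{conn.raddr.port}"
        print(f"{local:<22} -> {remote}")
```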
DOWNLOAD MASM 6.x, an assembler for the x86 (8086) processor. Just copy the contents of the folder to C:\system32; the AFD debugger window is included.
SkypeIn
SkypeIn allows Skype users to receive calls on their computers dialed by regular phone subscribers to a local Skype phone number; local numbers are available for Australia, Brazil, Chile,[4] Denmark, the Dominican Republic, Estonia, Finland, France, Germany, Hong Kong, Ireland, Japan, Mexico, New Zealand, Poland, Romania, South Korea, India, Sweden, Switzerland, the UK, and the United States. A Skype user can have local numbers in any of these countries, with calls to the number charged at the same rate as calls to fixed lines in the country. Some jurisdictions, including France and Norway, forbid the registration of their telephone numbers to anyone without a physical presence or citizenship in the country.
Videoconferencing
Videoconferencing was introduced in January 2006 for the Windows and Mac OS X platform clients. Skype 2.0 for Linux, which was released on March 13, 2008, also features support for videoconferencing. Skype for Windows, starting with version 3.6.0.216, supports "High Quality Video" with quality and features (e.g. full-screen and screen-in-screen modes) similar to that of mid-range videoconferencing systems.
Skype on mobile devices
On April 24, 2008, Skype announced that it offers Skype on around 50 mobile phones. On October 29, 2007, Skype launched its own mobile phone under the brand name 3 Skypephone, which runs a BREW OS.[8]
Skype is available for the N800 and N810 Internet Tablets.
Skype is available on both the Sony Mylo COM-1 and newer COM-2 models.
Skype is available for the PSP (PlayStation Portable) Slim and Lite with firmware version 3.90 or higher, but you need to purchase one of three microphone input peripherals. The first is the Skype headset kit, which comes with a headset with a boom microphone and the PSP remote, but in the colour black instead of the standard silver. The other two, which plug in to the proprietary USB accessory connector at the top, are the dedicated microphone peripheral and the PSP camera, which also has a built-in microphone. The upcoming PSP-3000 has a built-in microphone which allows communication without the Skype peripheral.[9]
Skype is available on mobile devices running Windows Mobile. The official Symbian version is currently under development. Official Skype support is available on Symbian and Java as part of X-Series together with mobile operator 3.
Other companies produce dedicated Skype phones which connect via WiFi. Third-party developers, such as Nimbuzz and Fring, have allowed Skype to run in parallel with several other competing VoIP/IM networks in any Symbian or Java environment. Nimbuzz has made Skype available to BlackBerry users.
Security features
Secure communication is a feature of Skype; encryption cannot be disabled, and is invisible to the user. Skype reportedly uses non-proprietary, widely trusted encryption techniques: RSA for key negotiation and the Advanced Encryption Standard to encrypt conversations. Skype provides an uncontrolled registration system for users with absolutely no proof of identity. This permits users to use the system without revealing their identity to other users. It is trivial, of course, for anybody to set up an account using any name; the displayed caller's name is no guarantee of authenticity.
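The RSA-plus-AES combination described above is a standard hybrid pattern. The sketch below is not Skype's actual protocol, only a generic illustration of that pattern using the Python cryptography package (an assumed dependency): a fresh AES session key encrypts the payload, and the session key itself is wrapped with the peer's RSA public key.

```python
# Generic hybrid-encryption sketch (RSA key transport + AES payload encryption).
# This is NOT Skype's real protocol, only an illustration of the general pattern.
# Requires the third-party "cryptography" package.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Peer's long-term RSA key pair (generated here just for the demo).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: fresh AES session key, encrypt the conversation data with it...
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"hello over VoIP", None)

# ...and wrap the session key with the peer's RSA public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Receiver: unwrap the session key with the RSA private key, then decrypt.
recovered = private_key.decrypt(wrapped_key, oaep)
print(AESGCM(recovered).decrypt(nonce, ciphertext, None))   # b'hello over VoIP'
```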
Issues
Security concerns
Skype protocol
Skype uses a proprietary Internet telephony (VoIP) network. The protocol has not been made publicly available by Skype, and official applications using the protocol are proprietary and closed-source. The main difference between Skype and standard VoIP clients is that Skype operates on a peer-to-peer model (originally based on the Kazaa software) rather than the more usual client-server model. The Skype user directory is entirely decentralized and distributed among the nodes of the network, i.e., users' computers, which allows the network to scale very easily to large sizes (currently about 240 million users) without a complex centralized infrastructure that would be costly to the Skype Group.
Skype protocol detection
Many networking and security companies claim to detect and control Skype's protocol for enterprise and carrier applications. While the specific detection methods used by these companies are often proprietary, Pearson's chi-square test and stochastic characterization with naive Bayesian classifiers are two approaches that were publicly published in 2007.
Incident in LHC sector 34
Geneva, 20 September 2008. During commissioning (without beam) of the final LHC sector (sector 34) at high current for operation at 5 TeV, an incident occurred at mid-day on Friday 19 September resulting in a large helium leak into the tunnel. Preliminary investigations indicate that the most likely cause of the problem was a faulty electrical connection between two magnets, which probably melted at high current leading to mechanical failure. CERN ’s strict safety regulations ensured that at no time was there any risk to people.
A full investigation is underway, but it is already clear that the sector will have to be warmed up for repairs to take place. This implies a minimum of two months of downtime for LHC operation. For the same fault, not uncommon in a normally conducting machine, the repair time would be a matter of days.
Further details will be made available as soon as they are known.
1 CERN, the European Organization for Nuclear Research, is the world's leading laboratory for particle physics. It has its headquarters in Geneva. At present, its Member States are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland and the United Kingdom. India, Israel, Japan, the Russian Federation, the United States of America, Turkey, the European Commission and UNESCO have Observer status.
With the advent of information technology, the face of the world changed to a very large extent. Work that was once thought to be time-consuming can now be finished in far less time. On the other side, unless kept within limits, it brings a lot of nuisances. One of the nuisances that has created controversy all over the globe is that of video games. Though they look pretty harmless, they can have seriously harmful consequences. One of them is that playing video games and PC games wastes time that could have been used for individual development. The impact of video games is found to be very high among youths, especially teenagers. After students Dylan Klebold and Eric Harris opened fire in their Colorado high school in 1999 -- shooting 20 people and killing 13 -- Linda Sanders filed a lawsuit. Her husband was a teacher at Columbine and among the dead. The media revealed that Harris and Klebold played a lot of violent video games, including "Wolfenstein 3D," "Doom," and "Mortal Kombat." Sanders named multiple video game publishers, including Sony and Nintendo, in the suit as well as Time Warner and Palm Pictures since the shooters had apparently watched "The Basketball Diaries," in which a character uses a shotgun to kill students at his high school. In today's ultra-violent media world, it appears there's plenty of blame to go around. But is it legitimate?
As of 2001, roughly 79 percent of America's youth play video games, many of them for at least eight hours a week (!!!) [source: National Institute on Media and the Family]. Beyond the obvious issues of concern, like "what happened to riding bikes around the neighborhood," there are bigger questions. Many people wonder how this type of exposure to violence as an adolescent affects social behavior. The rise in dramatically violent shootings by teenagers, many of whom apparently play violent video games, is helping the argument that video game violence translates into real-world situations. But other people aren't convinced and insist that video games are a scapegoat for a shocking social trend that has people scared and looking to place blame. Entertainment media has always made a great scapegoat: in the 1950s, lots of people blamed comic books for kids' bad behavior [source: CBS News].
Video games as we now know them are only about 20 years old, so there's nowhere near the amount of empirical evidence for or against their violent effects as there is surrounding, say, television violence. And even that's not a done deal.
So what exactly does science have to say about violent video games? Is there any evidence that shows a cause-effect relationship between shooting people in a game and shooting people in real life? On the next page, we'll see what the studies say.
Studies on Video Game Violence
In 2006, an 18-year-old named Devin Moore was arrested in Alabama on suspicion of car theft. The police officers brought him into the station and started booking him without any trouble. Minutes later, Moore attacked one police officer, stole his gun, shot him and another officer and then fled down the hall and shot a 9-1-1 dispatcher in the head. He then grabbed a set of car keys on his way out the back door, got in a police car and drove away.
[Photo: Joe Raedle/Newsmakers/Getty Images. Two teen boys play Time Crisis II at an arcade. The Federal Trade Commission released a report stating the movie, video game and music industries aggressively market products that carry adult ratings to underage youths.]
Moore had no criminal history. According to the lawsuit filed against video game companies after the incident, Moore had been playing a lot of Grand Theft Auto before the killings [source: CBS News]. At least on the surface, the connection between Moore's game play and his real actions is logical: In "Grand Theft Auto," players steal cars and kill cops.
But the argument is an old one. We've heard it for decades about violent TV. Science has come to a general consensus that violent TV does have an effect on kids' behavior, although it doesn't say that it causes children to act out the violence they see on the screen.
The basic claim in the video-game controversy is that video games are even more likely to affect people's behavior than TV because they're immersive. People don't just watch video games; they interact with them. The games are also repetitive and based on a rewards system. Repetition and rewards are primary components of classical conditioning, a proven psychological concept in which behavioral learning takes place as a result of rewarding (or punishing) particular behaviors. Also, since the brains of children and teens are still developing, they would, in theory, be even more susceptible to this type of "training."
There's some evidence to this effect, including a study reported in the journal "Psychological Science" in 2001. The report is an overall analysis of 35 individual studies on video game violence. It found several common conclusions, including:
· Children who play violent video games experience an increase in physiological signs of aggression. According to the authors behind the meta-analysis, when young people are playing a violent video game, their blood pressure and heart rate increases, and "fight or flight" hormones like adrenaline flood the brain. The same thing happens when people are in an actual, physical fight. One study even showed a difference in physical arousal between a bloody version of "Mortal Kombat" (a fight-to-the-death game) and a version with the blood turned off.
· Children who play violent video games experience an increase in aggressive actions. A 2000 study involving college students yielded interesting results. The study had two components: a session of video-game play, in which half the students played a violent video game and half played a non-violent video game, and then a simple reaction-time test that put two of the students in head-to-head competition. Whoever won the reaction-time test got to punish the loser with an audio blast. Of the students who won the reaction-time test, the ones who'd been playing a violent video game delivered longer, louder audio bursts to their opponents.
One of the most recent studies, conducted in 2006 at the Indiana University School of Medicine, went right to the source. Researchers scanned the brains of 44 kids immediately after they played video games. Half of the kids played "Need for Speed: Underground," an action racing game that doesn't have a violent component. The other half played "Medal of Honor: Frontline," an action game that includes violent first-person shooter activity (the game revolves around the player's point of view). The brain scans of the kids who played the violent game showed increased activity in the amygdala, which stimulates emotions, and decreased activity in the prefrontal lobe, which regulates inhibition, self-control and concentration. These activity changes didn't show up on the brain scans of the kids playing "Need for Speed."
If so much evidence points to a relationship between virtual aggression and real-world aggression, why are impressionable kids still playing "Mortal Kombat"? On the next page, we'll see why the issue isn't quite so cut and dried.
Controversy on Video Game Violence
In science, correlation doesn't imply causation. A relationship between virtual aggression and real-life aggression isn't necessarily one of cause and effect. Maybe bullies in real life also enjoy being bullies in virtual life, so they play violent video games.
To date, all lawsuits against video game companies for distributing violent content have been thrown out. In the Sanders lawsuit over the Columbine tragedy, the judge found that neither Nintendo nor Sony could've anticipated the shocking actions of Harris and Klebold. The First Amendment fully protects the companies' right to distribute games -- regardless of content.
David Walsh of the National Institute on Media and Family disagrees, and noted that in some analytical studies, children who were determined to be inherently non-hostile actually showed a greater increase in real-world aggression than their hostile counterparts [source: National Institute on Media and the Family]. But the analysis of a collection of small studies isn't considered scientific proof. It's merely a suggestion of a trend. And for many people, that's just not enough.
The small test groups and lack of long-term studies cast a shadow on the body of evidence against violent video games. Many people believe video games offer no more exposure to violence than television shows featuring murder, not to mention movies that graphically depict serial killers and war.
Other primary arguments against a cause-effect relationship between game violence and real-life violence focus on much wider trends than the occasional horrific school shooting. Some experts point to the fact that while violent video game sales are on the rise, violent crime rates in the United States are going down [source: LiveScience].
However, the Missouri State Correctional System isn't taking any chances. As of 2004, convicted violent offenders in Missouri no longer have access to games like "Grand Theft Auto" and "Hitman: Contracts" (in which players get paid to kill people with weapons like meat hooks). And Missouri's not alone in its decision. Some retailers now refuse to sell violent "rated M" (mature) games to kids under 18. The video game industry itself is attempting to self-regulate against publishers marketing "rated M" games to children.
The controversy is far from over. But concern over the potential anti-social effects of violent games isn't affecting sales -- or at least not in the direction activists might hope for. The Associated Press reported in March 2008 that video game sales -- hardware and software combined -- reached $1.33 billion in February [source: NYT]. That's for the month, not the quarter, and it's 34 percent higher than January 2008 sales. With Grand Theft Auto IV due out in April, sales are expected to spike again. As AP reports, the game's publisher says that pre-orders have surpassed projections.
The field of computer forensics is relatively young. In the early days of computing, courts considered evidence from computers to be no different from any other kind of evidence. As computers became more advanced and sophisticated, opinion shifted -- the courts learned that computer evidence was easy to corrupt, destroy or change.
Usually, detectives have to secure a warrant to search a suspect's computer for evidence. The warrant must include where detectives can search and what sort of evidence they can look for. In other words, a detective can't just serve a warrant and look wherever he or she likes for anything suspicious. In addition, the warrant's terms can't be too general. Most judges require detectives to be as specific as possible when requesting a warrant.
For this reason, it's important for detectives to research the suspect as much as possible before requesting a warrant. Consider this example: A detective secures a warrant to search a suspect's laptop computer. The detective arrives at the suspect's home and serves the warrant. While at the suspect's home, the detective sees a desktop PC. The detective can't legally search the PC because it wasn't included in the original warrant.
Every computer investigation is somewhat unique. Some investigations might only require a week to complete, but others could take months. Here are some factors that can impact the length of an investigation:
· The expertise of the detectives
· The number of computers being searched
· The amount of storage detectives must sort through (hard drives, CDs, DVDs and thumb drives)
· Whether the suspect attempted to hide or delete information
What are the steps in collecting evidence from a computer? Keep reading to find out.
In Plain View
The plain view doctrine gives detectives the authority to gather any evidence that is in the open while conducting a search. If the detective in our example saw evidence of a crime on the screen of the suspect's desktop PC, then the detective could use that as evidence against the suspect and search the PC even though it wasn't covered in the original warrant. If the PC wasn't turned on, then the detective would have no authority to search it and would have to leave it alone.
Phases of a Computer Forensics Investigation
Judd Robbins, a computer scientist and leading expert in computer forensics, lists the following steps investigators should follow to retrieve computer evidence:
1.Secure the computer system to ensure that the equipment and data are safe. This means the detectives must make sure that no unauthorized individual can access the computers or storage devices involved in the search. If the computer system connects to the Internet, detectives must sever the connection.
2.Find every file on the computer system, including files that are encrypted, protected by passwords, hidden or deleted, but not yet overwritten. Investigators should make a copy of all the files on the system. This includes files on the computer's hard drive or in other storage devices. Since accessing a file can alter it, it's important that investigators only work from copies of files while searching for evidence. The original system should remain preserved and intact.
3.Recover as much deleted information as possible using applications that can detect and retrieve deleted data.
4.Reveal the contents of all hidden files with programs designed to detect the presence of hidden data.
5.Decrypt and access protected files.
6.Analyze special areas of the computer's disks, including parts that are normally inaccessible. (In computer terms, unused space on a computer's drive is called unallocated space. That space could contain files or parts of files that are relevant to the case.)
7.Document every step of the procedure. It's important for detectives to provide proof that their investigations preserved all the information on the computer system without changing or damaging it. Years can pass between an investigation and a trial, and without proper documentation, evidence may not be admissible. Robbins says that the documentation should include not only all the files and data recovered from the system, but also a report on the system's physical layout and whether any files had encryption or were otherwise hidden.
8.Be prepared to testify in court as an expert witness in computer forensics. Even when an investigation is complete, the detectives' job may not be done. They may still need to provide testimony in court [source: Robbins].
All of these steps are important, but the first step is critical. If investigators can't prove that they secured the computer system, the evidence they find may not be admissible. It's also a big job. In the early days of computing, the system might have included a PC and a few floppy disks. Today, it could include multiple computers, disks, thumb drives, external drives, peripherals and Web servers.
Not as Deleted as You Think
When you delete a file, your computer moves the file to a new directory. Once you empty your recycle bin, your computer makes a note that the space occupied by that file is available. The file remains there until the computer writes new data on that part of the drive. With the right software, you can retrieve deleted files as long as they haven't been overwritten.
Some criminals have found ways to make it even more difficult for investigators to find information on their systems. They use programs and applications known as anti-forensics. Detectives have to be aware of these programs and how to disable them if they want to access the information in computer systems.
What exactly are anti-forensics, and what's their purpose? Find out in the next section.
Anti-forensics can be a computer investigator's worst nightmare. Programmers design anti-forensic tools to make it hard or impossible to retrieve information during an investigation. Essentially, anti-forensics refers to any technique, gadget or software designed to hamper a computer investigation.
There are dozens of ways people can hide information. Some programs can fool computers by changing the information in files' headers. A file header is normally invisible to humans, but it's extremely important -- it tells the computer what kind of file the header is attached to. If you were to rename an mp3 file so that it had a .gif extension, the computer would still know the file was really an mp3 because of the information in the header. Some programs let you change the information in the header so that the computer thinks it's a different kind of file. Detectives looking for a specific file format could skip over important evidence because it looked like it wasn't relevant.
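Forensic tools therefore look at the leading bytes of a file rather than trusting its extension. Below is a minimal sketch of that idea; the signature table is a small, illustrative sample, not a complete list, and the file name is made up for the demo.

```python
# Identify a file by its leading "magic" bytes instead of trusting the extension.
# The signature table is a tiny illustrative sample, not a complete list.
MAGIC_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"\xff\xd8\xff": "JPEG image",
    b"GIF87a": "GIF image",
    b"GIF89a": "GIF image",
    b"ID3": "MP3 audio (with ID3 tag)",
    b"%PDF": "PDF document",
    b"PK\x03\x04": "ZIP archive (also DOCX, XLSX, APK, ...)",
}

def sniff(path: str) -> str:
    with open(path, "rb") as f:
        head = f.read(16)
    for signature, description in MAGIC_SIGNATURES.items():
        if head.startswith(signature):
            return description
    return "unknown (no matching signature)"

# Demo: a PNG header saved under a misleading .mp3 extension is still detected.
with open("disguised.mp3", "wb") as f:
    f.write(b"\x89PNG\r\n\x1a\n" + b"\x00" * 32)
print(sniff("disguised.mp3"))   # -> PNG image
```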
Other programs can divide files up into small sections and hide each section at the end of other files. Files often have unused space called slack space. With the right program, you can hide files by taking advantage of this slack space. It's very challenging to retrieve and reassemble the hidden information.
It's also possible to hide one file inside another. Executable files -- files that computers recognize as programs -- are particularly problematic. Programs called packers can insert executable files into other kinds of files, while tools called binders can bind multiple executable files together.
Encryption is another way to hide data. When you encrypt data, you use a complex set of rules called an algorithm to make the data unreadable. For example, the algorithm might change a text file into a seemingly meaningless collection of numbers and symbols. A person wanting to read the data would need the encryption's key, which reverses the encryption process so that the numbers and symbols would become text. Without the key, detectives have to use computer programs designed to crack the encryption algorithm. The more sophisticated the algorithm, the longer it will take to decrypt it without a key.
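For a feel of what "unreadable without the key" looks like in practice, here is a tiny symmetric-encryption sketch using the Python cryptography package's Fernet recipe. This is an illustrative choice only, not a claim about what any particular suspect or tool would use.

```python
# Tiny symmetric-encryption demo: without the key the token is just opaque bytes.
# Uses the third-party "cryptography" package's Fernet recipe as an example.
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # whoever holds this can decrypt
token = Fernet(key).encrypt(b"meet at the usual place")

print(token[:40], b"...")             # seemingly meaningless without the key
print(Fernet(key).decrypt(token))     # b'meet at the usual place'
```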
Other anti-forensic tools can change the metadata attached to files. Metadata includes information like when a file was created or last altered. Normally you can't change this information, but there are programs that can let a person alter the metadata attached to files. Imagine examining a file's metadata and discovering that it says the file won't exist for another three years and was last accessed a century ago. If the metadata is compromised, it makes it more difficult to present the evidence as reliable.
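File timestamps are a simple example of such metadata, and on most systems they really can be rewritten with ordinary APIs. The small demonstration below works on a throwaway file; the back-dating interval is arbitrary.

```python
# Read and deliberately rewrite a file's access/modification timestamps.
# Demonstrates, on a throwaway file, why timestamp metadata alone is not proof.
import os, time, datetime

path = "throwaway.txt"
with open(path, "w") as f:
    f.write("demo")

st = os.stat(path)
print("modified:", datetime.datetime.fromtimestamp(st.st_mtime))

# Back-date the file by ten years; both atime and mtime are rewritten.
ten_years_ago = time.time() - 10 * 365 * 24 * 3600
os.utime(path, (ten_years_ago, ten_years_ago))

print("modified:", datetime.datetime.fromtimestamp(os.stat(path).st_mtime))
```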
Some computer applications will erase data if an unauthorized user tries to access the system. Some programmers have examined how computer forensics programs work and have tried to create applications that either block or attack the programs themselves. If computer forensics specialists come up against such a criminal, they have to use caution and ingenuity to retrieve data.
A few people use anti-forensics to demonstrate how vulnerable and unreliable computer data can be. If you can't be sure when a file was created, when it was last accessed or even if it ever existed, how can you justify using computer evidence in a court of law? While that may be a valid question, many countries do accept computer evidence in court, though the standards of evidence vary from one country to another.
What exactly are the standards of evidence? We'll find out in the next section.
Standards of Computer Evidence
In the United States, the rules are extensive for seizing and using computer evidence. The U.S. Department of Justice has a manual titled "Searching and Seizing Computers and Obtaining Electronic Evidence in Criminal Investigations." The document explains when investigators are allowed to include computers in a search, what kind of information is admissible, how the rules of hearsay apply to computer information and guidelines for conducting a search.
Think Globally, Prosecute Locally
One challenge computer investigators face is that while computer crimes know no borders, laws do. What's illegal in one country may not be in another. Moreover, there are no standardized international rules regarding the collection of computer evidence. Some countries are trying to change that. The G8 group, which includes the United States, Canada, France, Germany, Great Britain, Japan, Italy and Russia, has identified six general guidelines regarding computer forensics. These guidelines concentrate on preserving evidence integrity.
If the investigators believe the computer system is only acting as a storage device, they usually aren't allowed to seize the hardware itself. This limits any evidence investigation to the field. On the other hand, if the investigators believe the hardware itself is evidence, they can seize the hardware and bring it to another location. For example, if the computer is stolen property, then the investigators could seize the hardware.
In order to use evidence from a computer system in court, the prosecution must authenticate the evidence. That is, the prosecution must be able to prove that the information presented as evidence came from the suspect's computer and that it remains unaltered.
Although it's generally acknowledged that tampering with computer data is both possible and relatively simple to do, the courts of the United States so far haven't discounted computer evidence completely. Rather, the courts require proof or evidence of tampering before dismissing computer evidence.
Another consideration the courts take into account with computer evidence is hearsay. Hearsay is a term referring to statements made outside of a court of law. In most cases, courts can't allow hearsay as evidence. The courts have determined that information on a computer does not constitute hearsay in most cases, and is therefore admissible. If the computer records include human-generated statements like e-mail messages, the court must determine if the statements can be considered trustworthy before allowing them as evidence. Courts determine this on a case-by-case basis.
Computer forensics experts use some interesting tools and applications in their investigations. Learn more about them in the next section.
This Whole Court is Out of Order
Vincent Liu, a computer security specialist, used to create anti-forensic applications. He didn't do it to hide his activities or make life more difficult for investigators. Instead, he did it to demonstrate that computer data is unreliable and shouldn't be used as evidence in a court of law. Liu is concerned that computer forensics tools aren't foolproof and that relying on computer evidence is a mistake [source: CSO].
Computer Forensics Tools
Programmers have created many computer forensics applications. For many police departments, the choice of tools depends on department budgets and available expertise.
[Image: ©iStockphoto/Muharrem Öner. No matter how limited a department's budget is, no credible investigator would stoop to wrenching open a computer to find clues.]
Here are a few computer forensics programs and devices that make computer investigations possible:
· Disk imaging software records the structure and contents of a hard drive. With such software, it's possible to not only copy the information in a drive, but also preserve the way files are organized and their relationship to one another.
· Software or hardware write tools copy and reconstruct hard drives bit by bit. Both the software and hardware tools avoid changing any information. Some tools require investigators to remove hard drives from the suspect's computer first before making a copy.
· Hashing tools compare original hard disks to copies. The tools analyze data and assign it a unique number. If the hash numbers on an original and a copy match, the copy is a perfect replica of the original. (A minimal hash-comparison sketch appears after this list.)
· Investigators use file recovery programs to search for and restore deleted data. These programs locate data that the computer has marked for deletion but has not yet overwritten. Sometimes this results in an incomplete file, which can be more difficult to analyze.
· There are several programs designed to preserve the information in a computer's random access memory (RAM). Unlike information on a hard drive, the data in RAM ceases to exist once someone shuts off the computer. Without the right software, this information could be lost easily.
· Analysis software sifts through all the information on a hard drive, looking for specific content. Because modern computers can hold gigabytes of information, it's very difficult and time consuming to search computer files manually. For example, some analysis programs search and evaluate Internet cookies, which can help tell investigators about the suspect's Internet activities. Other programs let investigators search for specific content that may be on the suspect's computer system.
· Encryption decoding software and password cracking software are useful for accessing protected data.
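As mentioned in the hashing bullet above, verifying that a working copy matches the original image is usually done with a cryptographic digest. Here is a minimal sketch of that comparison; SHA-256 is chosen as an example algorithm and the file names are hypothetical stand-ins.

```python
# Compare a drive image against its working copy by hashing both, chunk by chunk.
# SHA-256 is used as an example algorithm; the file names are hypothetical.
import hashlib

def sha256_of(path: str, chunk_size: int = 1024 * 1024) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demo with two small stand-in files; real use would point at disk images.
for name in ("original.img", "copy.img"):
    with open(name, "wb") as f:
        f.write(b"\x00" * 4096)

match = sha256_of("original.img") == sha256_of("copy.img")
print("exact replica" if match else "copy does NOT match the original")
```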
These tools are only useful as long as investigators follow the right procedures. Otherwise, a good defense lawyer could suggest that any evidence gathered in the computer investigation isn't reliable. Of course, a few anti-forensics experts argue that no computer evidence is completely reliable.
Whether courts continue to accept computer evidence as reliable remains to be seen. Anti-forensics experts argue that it's only a matter of time before someone proves in a court of law that manipulating computer data without being detected is both possible and plausible. If that's the case, courts may have a hard time justifying the inclusion of computer evidence in a trial or investigation.
Phoning It In
Cell phones can contain important information on them. A cell phone is essentially a small computer. A few computer forensics vendors offer devices that can copy all the contents in a cell phone's memory and print up a comprehensive report. These devices retrieve everything from text messages to ring tones.
1. Play-Doh One smell most people remember from childhood is the odor of Play-Doh, the brightly-colored, nontoxic modeling clay. Play-Doh was accidentally invented in 1955 by Joseph and Noah McVicker while trying to make a wallpaper cleaner. It was marketed a year later by toy manufacturer Rainbow Crafts. More than 700 million pounds of Play-Doh have sold since then, but the recipe remains a secret.
2. Fireworks Fireworks originated in China some 2,000 years ago, and legend has it that they were accidentally invented by a cook who mixed together charcoal, sulfur, and saltpeter -- all items commonly found in kitchens in those days. The mixture burned and when compressed in a bamboo tube, it exploded. There's no record of whether it was the cook's last day on the job.
3. Potato Chips If you can't eat just one potato chip, blame it on chef George Crum. He reportedly created the salty snack in 1853 at Moon's Lake House near Saratoga Springs, New York. Fed up with a customer who continuously sent his fried potatoes back, complaining that they were soggy and not crunchy enough, Crum sliced the potatoes as thin as possible, fried them in hot grease, then doused them with salt. The customer loved them and "Saratoga Chips" quickly became a popular item at the lodge and throughout New England. Eventually, the chips were mass-produced for home consumption, but since they were stored in barrels or tins, they quickly went stale. Then, in the 1920s, Laura Scudder invented the airtight bag by ironing together two pieces of waxed paper, thus keeping the chips fresh longer. Today, chips are packaged in plastic or foil bags or cardboard containers and come in a variety of flavors, including sour cream and onion, barbecue, and salt and vinegar.
4. Slinky In 1943, naval engineer Richard James was trying to develop a spring that would support and stabilize sensitive equipment on ships. When one of the springs accidentally fell off a shelf, it continued moving, and James got the idea for a toy. His wife Betty came up with the name, and when the Slinky made its debut in late 1945, James sold 400 of the bouncy toys in 90 minutes. Today, more than 250 million Slinkys have been sold worldwide.
5. Saccharin Saccharin, the oldest artificial sweetener, was accidentally discovered in 1879 by researcher Constantine Fahlberg, who was working at Johns Hopkins University in the laboratory of professor Ira Remsen. Fahlberg's discovery came after he forgot to wash his hands before lunch. He had spilled a chemical on his hands and it, in turn, caused the bread he ate to taste unusually sweet. In 1880, the two scientists jointly published the discovery, but in 1884, Fahlberg obtained a patent and began mass-producing saccharin without Remsen. The use of saccharin did not become widespread until sugar was rationed during World War I, and its popularity increased during the 1960s and 1970s with the manufacture of Sweet'N Low and diet soft drinks.
6. Post-it Notes A Post-it note is a small piece of paper with a strip of low-tack adhesive on the back that allows it to be temporarily attached to documents, walls, computer monitors, and just about anything else. The idea for the Post-it note was conceived in 1974 by Arthur Fry as a way of holding bookmarks in his hymnal while singing in the church choir. He was aware of an adhesive accidentally developed in 1968 by fellow 3M employee Spencer Silver. No application for the lightly sticky stuff was apparent until Fry's idea. The 3M company was initially skeptical about the product's profitability, but in 1980, the product was introduced around the world. Today, Post-it notes are sold in more than 100 countries.
7. Silly Putty It bounces, it stretches, it breaks -- it's Silly Putty, the silicone-based plastic clay marketed as a children's toy by Binney & Smith, Inc. During World War II, while attempting to create a synthetic rubber substitute, James Wright dropped boric acid into silicone oil. The result was a polymerized substance that bounced, but it took several years to find a use for the product. Finally, in 1950, marketing expert Peter Hodgson saw its potential as a toy, renamed it Silly Putty, and a classic toy was born! Not only is it fun, Silly Putty also has practical uses -- it picks up dirt, lint, and pet hair; can stabilize wobbly furniture; and is useful in stress reduction, physical therapy, and in medical and scientific simulations. It was even used by the crew of Apollo 8 to secure tools in zero gravity.
8. Microwave Ovens The microwave oven is now a standard appliance in most American households, but it has only been around since the late 1940s. In 1945, Percy Spencer was experimenting with a new vacuum tube called a magnetron while doing research for the Raytheon Corporation. He was intrigued when the candy bar in his pocket began to melt, so he tried another experiment with popcorn. When it began to pop, Spencer immediately saw the potential in this revolutionary process. In 1947, Raytheon built the first microwave oven, the Radarange, which weighed 750 pounds, was 5½ feet tall, and cost about $5,000. When the Radarange first became available for home use in the early 1950s, its bulky size and high price tag made it unpopular with consumers. But in 1967, a much more popular 100-volt, countertop version was introduced at a price of $495.
9. Corn Flakes In 1894, Dr. John Harvey Kellogg was the superintendent of the Battle Creek Sanitarium in Michigan. He and his brother Will Keith Kellogg were Seventh Day Adventists, and they were searching for wholesome foods to feed patients that also complied with the Adventists' strict vegetarian diet. When Will accidentally left some boiled wheat sitting out, it went stale by the time he returned. Rather than throw it away, the brothers sent it through rollers, hoping to make long sheets of dough, but they got flakes instead. They toasted the flakes, which were a big hit with patients, and patented them under the name Granose. The brothers experimented with other grains, including corn, and in 1906, Will created the Kellogg's company to sell the corn flakes. On principle, John refused to join the company because Will lowered the health benefits of the cereal by adding sugar.
CLICK FOR SOURCE
Want to earn more money? If yes, here are some of the methods. Do join :)
1. mGinger: mGinger is the first of its kind opt-in, permission-based mobile marketing platform in India. mGinger is a service providing targeted advertisements on mobile phones. The advertisements are targeted at a consumer base who have opted in to this service. The consumer base is built through a registration process in which the consumers specify their commercial interests, the maximum number of ads they would like to receive in a day, convenient time-slots and their demographic information. Apart from getting information related to their particular interests, the consumers also receive monetary incentives for every ad they themselves receive and for each ad received in their network, up to two levels of referrals. Advertisers leverage the service to search for and select consumers based on their commercial interests, location, demographics and other criteria and send specific advertisements to their target audience, all without the fear of incurring even a single consumer's wrath. The mGinger platform solves critical problems like content composition, cost of campaign and return on investment measurability for advertisers.
CLICK HERE TO JOIN
2. YouMint: YouMint.com is owned by MobileTree Ltd, a company registered in England. MobileTree's parent company is a London-based holding company for three telecom-related businesses and works closely with over 400 mobile operators globally, i.e. more than 60% of all mobile operators in the world!
YouMint has been awarded the "Most Innovative SME Solution Award" by HSBC. The award was presented to YouMint's CEO in the presence of Mrs. Naina Lal Kidwai (CEO, HSBC India) by Mr. Ramraj Pai (Director, SME Ratings, CRISIL).
CLICK HERE TO JOIN
How does it work?
It is very simple. There are several ways you can use YouMint to your advantage!
1) Send Free SMS from YouMint.
As a YouMint member, you can send Free SMS to your friends. You can send more Free SMSs to them if they are on YouMint as well! So go ahead and invite all your friends to YouMint so you can send them Free SMSs.
More Invites = More Free SMSs. The more people you have in your network the more Free SMSs you can send everyday.
2) Get Paid for Incoming SMS and Emails. You get FULL control over how many, at what time and about what!
When you sign up with us, you will start receiving relevant promotional messages based on the number of messages you specify and the time you want to receive them. Each time you receive an SMS you will get paid for it! Each time you open the YouMint Cash Mail, you get paid.
3) Get Paid for promos sent to your network on YouMint.
You can invite your friends to join YouMint and we will pay you every time they interact with a promotion too! No kidding! We pay you for your direct referrals and for your referrals' referrals as well, at the following rates:
You – Rs 0.20
Your referral – Rs 0.10
Your referrals' referral – Rs 0.05
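For a rough sense of how those two-level rates add up, here is a small Python sketch. The assumption that each rate is paid per promo interaction, and the example counts used, are mine for illustration, not YouMint figures.

# Illustrative only: rough two-level referral earnings with the rates listed above.
RATE_SELF = 0.20      # Rs you earn per promo you interact with
RATE_LEVEL1 = 0.10    # Rs you earn per promo a direct referral interacts with
RATE_LEVEL2 = 0.05    # Rs you earn per promo a second-level referral interacts with

def estimated_earnings(own, level1, level2):
    """Estimated earnings in rupees for the given numbers of promo interactions."""
    return own * RATE_SELF + level1 * RATE_LEVEL1 + level2 * RATE_LEVEL2

# Example: 60 promos yourself, 10 direct referrals at 60 promos each,
# and 30 second-level referrals at 60 promos each.
print(estimated_earnings(60, 10 * 60, 30 * 60))   # -> 162.0 Rs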
The Large Hadron Collider (LHC) is the world's largest and highest-energy particle accelerator complex, intended to collide opposing beams of protons (one of several types of hadrons) with very high kinetic energy. Its main purpose is to explore the validity and limitations of the Standard Model, the current theoretical picture for particle physics. It is theorized that the collider will confirm the existence of the Higgs boson; this would supply a crucial missing link in the Standard Model and explain how other elementary particles acquire properties such as mass. The LHC was built by the European Organization for Nuclear Research (CERN), and lies underneath the Franco-Swiss border between the Jura Mountains and the Alps near Geneva, Switzerland. It is funded by and built in collaboration with over eight thousand physicists from over eighty-five countries as well as hundreds of universities and laboratories. The LHC is operational and is presently being prepared for collisions. The first beams were circulated through the collider on 10 September 2008, and the first high-energy collisions are expected to take place after 6-8 weeks. Although there have been questions concerning the safety of the Large Hadron Collider in the media and even through the courts, the consensus in the scientific community is that there is no conceivable threat from the LHC particle collisions.
The LHC is the world's largest and highest-energy particle accelerator.[1][2] The collider is contained in a circular tunnel, with a circumference of 27 kilometres (17 mi), at a depth ranging from 50 to 175 metres underground.
The 3.8 m wide concrete-lined tunnel, constructed between 1983 and 1988, was formerly used to house the Large Electron-Positron Collider. It crosses the border between Switzerland and France at four points, with most of it in France. Surface buildings hold ancillary equipment such as compressors, ventilation equipment, control electronics and refrigeration plants.
The collider tunnel contains two adjacent parallel beam pipes, each containing a proton beam; the two beams travel in opposite directions around the ring and intersect at four points. Some 1,232 dipole magnets keep the beams on their circular path, while an additional 392 quadrupole magnets keep the beams focused, in order to maximize the chances of interaction at the four intersection points where the two beams cross. In total, over 1,600 superconducting magnets are installed, with most weighing over 27 tonnes. Approximately 96 tonnes of liquid helium is needed to keep the magnets at their operating temperature of 1.9 K, making the LHC the largest cryogenic facility in the world at liquid helium temperature.
Superconducting quadrupole electromagnets are used to direct the beams to four intersection points, where interactions between protons will take place.
Once or twice a day, as the protons are accelerated from 450 GeV to 7 TeV, the field of the superconducting dipole magnets will be increased from 0.54 to 8.3 tesla (T). The protons will each have an energy of 7 TeV, giving a total collision energy of 14 TeV (2.2 μJ). At this energy the protons have a Lorentz factor of about 7,500 and move at about 99.999999% of the speed of light. It will take less than 90 microseconds (μs) for a proton to travel once around the main ring – a rate of about 11,000 revolutions per second. Rather than continuous beams, the protons will be bunched together into 2,808 bunches, so that interactions between the two beams will take place at discrete intervals, never shorter than 25 nanoseconds (ns) apart. However, the LHC will be operated with fewer bunches when it is first commissioned, giving it a bunch crossing interval of 75 ns.
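As a quick sanity check of those numbers, the short Python snippet below recomputes the Lorentz factor, the beam speed and the revolution time from the 7 TeV beam energy and the roughly 27 km circumference quoted earlier; the exact 26,659 m circumference is filled in here as an assumption, and all results are approximate.

# Quick sanity check of the figures above (approximate values only).
import math

E_beam_GeV = 7000.0          # proton energy at top energy: 7 TeV
m_p_GeV = 0.938272           # proton rest mass in GeV/c^2
circumference_m = 26_659.0   # ring circumference (the "27 km" quoted above; exact value assumed)
c = 299_792_458.0            # speed of light in m/s
eV_to_J = 1.602176634e-19    # joules per electronvolt

gamma = E_beam_GeV / m_p_GeV              # Lorentz factor
beta = math.sqrt(1.0 - 1.0 / gamma**2)    # speed as a fraction of c
turn_s = circumference_m / (beta * c)     # time for one revolution

print(f"gamma ~ {gamma:.0f}")                              # about 7,500, as quoted
print(f"beta  ~ {beta:.8f} c")                             # ~0.99999999 c
print(f"one turn ~ {turn_s * 1e6:.1f} microseconds")       # ~88.9 us, i.e. under 90 us
print(f"~ {1.0 / turn_s:,.0f} revolutions per second")     # ~11,245, i.e. about 11,000
print(f"collision energy ~ {2 * E_beam_GeV * 1e9 * eV_to_J * 1e6:.1f} microjoules")  # ~2.2 uJ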
Prior to being injected into the main accelerator, the particles are prepared by a series of systems that successively increase their energy. The first system is the linear particle accelerator LINAC 2 generating 50 MeV protons, which feeds the Proton Synchrotron Booster (PSB). There the protons are accelerated to 1.4 GeV and injected into the Proton Synchrotron (PS), where they are accelerated to 26 GeV. Finally the Super Proton Synchrotron (SPS) is used to further increase their energy to 450 GeV before they are at last injected (over a period of 20 minutes) into the main ring. Here the proton bunches are accumulated, accelerated (over a period of 20 minutes) to their peak 7 TeV energy, and finally stored for 10 to 24 hours while collisions occur at the four intersection points.
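A minimal sketch of that injector chain written out as data, using only the energies quoted in the paragraph above, to show roughly how much each stage multiplies the beam energy:

# The injector chain described above (energies taken from the text).
stages_GeV = [
    ("LINAC 2", 0.050),
    ("Proton Synchrotron Booster (PSB)", 1.4),
    ("Proton Synchrotron (PS)", 26.0),
    ("Super Proton Synchrotron (SPS)", 450.0),
    ("LHC main ring", 7000.0),
]

# Print the energy gain factor of each stage over the previous one.
for (prev_name, prev_E), (next_name, next_E) in zip(stages_GeV, stages_GeV[1:]):
    print(f"{prev_name} -> {next_name}: {prev_E} GeV -> {next_E} GeV (x{next_E / prev_E:.0f})")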
The LHC will also be used to collide lead (Pb) heavy ions with a total collision energy of 1,150 TeV. The Pb ions will first be accelerated by the linear accelerator LINAC 3, and the Low-Energy Injector Ring (LEIR) will be used as an ion storage and cooler unit. The ions will then be further accelerated by the PS and SPS before being injected into the LHC ring, where they will reach an energy of 2.76 TeV per nucleon.
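The per-nucleon and total figures above are consistent if the ions are assumed to be lead-208, i.e. 208 nucleons per ion, which the paragraph does not state explicitly; a one-line check:

# Rough consistency check of the lead-ion figures above.
# Assumption: the ions are Pb-208 (208 nucleons per ion), not stated in the text.
energy_per_nucleon_TeV = 2.76
nucleons_per_ion = 208

total_TeV = 2 * energy_per_nucleon_TeV * nucleons_per_ion   # two colliding beams
print(f"{total_TeV:.0f} TeV")   # ~1148 TeV, matching the ~1,150 TeV quoted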
Detectors
The Large Hadron Collider's (LHC) CMS detector being installed.
Six detectors have been constructed at the LHC, located underground in large caverns excavated at the LHC's intersection points. Two of them, the ATLAS experiment and the Compact Muon Solenoid (CMS), are large general-purpose particle detectors.[2] A Large Ion Collider Experiment (ALICE) and LHCb have more specific roles, while the last two, TOTEM and LHCf, are much smaller and are used for very specialized research. The BBC's summary of the main detectors is:
1. ATLAS – one of two so-called general-purpose detectors. ATLAS will be used to look for signs of new physics, including the origins of mass and extra dimensions.
2. CMS – the other general-purpose detector will, like ATLAS, hunt for the Higgs boson and look for clues to the nature of dark matter.
3. ALICE – will study a "liquid" form of matter called quark-gluon plasma that existed shortly after the Big Bang.
4. LHCb – equal amounts of matter and anti-matter were created in the Big Bang. LHCb will try to investigate what happened to the "missing" anti-matter.
Purpose
A Feynman diagram of one way the Higgs boson may be produced at the LHC: two quarks each emit a W or Z boson, which combine to make a neutral Higgs.
A simulated event in the CMS detector, featuring the appearance of the Higgs boson.
When in operation, about seven thousand scientists from eighty countries will have access to the LHC. It is theorized that the collider will produce the elusive Higgs boson, the last unobserved particle among those predicted by the Standard Model. The verification of the existence of the Higgs boson would shed light on the mechanism of electroweak symmetry breaking, through which the particles of the Standard Model are thought to acquire their mass. In addition to the Higgs boson, new particles predicted by possible extensions of the Standard Model might be produced at the LHC. More generally, physicists hope that the LHC will enhance their ability to answer the following questions:
Is the Higgs mechanism for generating elementary particle masses in the Standard Model indeed realised in nature? If so, how many Higgs bosons are there, and what are their masses?
Are electromagnetism, the strong nuclear force and the weak nuclear force just different manifestations of a single unified force, as predicted by various Grand Unification Theories?
Why is gravity so many orders of magnitude weaker than the other three fundamental forces? (See also the hierarchy problem.)
Is supersymmetry realised in nature, implying that the known Standard Model particles have supersymmetric partners?
Will the more precise measurements of the masses and decays of the quarks continue to be mutually consistent within the Standard Model?
Why are there apparent violations of the symmetry between matter and antimatter?
What is the nature of dark matter and dark energy?
Are there extra dimensions, as predicted by various models inspired by string theory, and can we detect them?
Of the possible discoveries the LHC might make, only the discovery of the Higgs particle is relatively uncontroversial, but even this is not considered a certainty. Stephen Hawking said in a BBC interview that "I think it will be much more exciting if we don't find the Higgs. That will show something is wrong, and we need to think again. I have a bet of one hundred dollars that we won't find the Higgs." In the same interview Hawking mentions the possibility of finding superpartners and adds that "whatever the LHC finds, or fails to find, the results will tell us a lot about the structure of the universe."
As an ion collider
The LHC physics programme is mainly based on proton–proton collisions. However, shorter running periods, typically one month per year, with heavy-ion collisions are included in the programme. While lighter ions are considered as well, the baseline scheme deals with lead ions. This will allow the experimental programme currently in progress at the Relativistic Heavy Ion Collider (RHIC) to be advanced. The aim of the heavy-ion programme is to provide a window on a state of matter known as quark-gluon plasma, which characterized the early stage of the life of the Universe.
Test timeline
The first beam was circulated through the collider on the morning of 10 September 2008. CERN successfully fired the protons around the tunnel in stages, three kilometres at a time. The particles were fired in a clockwise direction into the accelerator and successfully steered around it at 10:28 local time. The LHC successfully completed its first major test: after a series of trial runs, two white dots flashed on a computer screen, showing that the protons had traveled the full length of the collider. It took less than one hour to guide the stream of particles around its inaugural circuit. CERN next successfully sent a beam of protons in a counterclockwise direction, taking slightly longer at one and a half hours due to a problem with the cryogenics, with the full circuit being completed at 14:59.
The first "modest" high-energy collisions at a center-of-mass energy of 900 GeV are expected to take place at the beginning of the week starting 22 September 2008. By 12 October 2008, i.e. before the official inauguration on 21 October 2008, the LHC should already operate at a reduced energy of 10 TeV. The winter shut-down (likely starting around the end of November) will then be used to train[15] the superconducting magnets, so that the 2009 run can start at the full 14 TeV design energy.
Expected results
Once the supercollider is up and running, CERN scientists estimate that if the Standard Model is correct, a Higgs boson may be produced every few hours. At this rate, it may take up to three years to collect enough statistics to discover the Higgs boson unambiguously. Similarly, it may take one year or more before sufficient results concerning supersymmetric particles have been gathered to draw meaningful conclusions.
Proposed upgrade
CMS detector for LHC
Main article: Super Large Hadron Collider
After some years of running, any particle physics experiment typically begins to suffer from diminishing returns; each additional year of operation discovers less than the year before. The way around the diminishing returns is to upgrade the experiment, either in energy or in luminosity. A luminosity upgrade of the LHC, called the Super LHC, has been proposed,[16] to be made after ten years of LHC operation. The optimal path for the LHC luminosity upgrade includes an increase in the beam current (i.e., the number of protons in the beams) and the modification of the two high-luminosity interaction regions, ATLAS and CMS. To achieve these increases, the energy of the beams at the point that they are injected into the (Super) LHC should also be increased to 1 TeV. This will require an upgrade of the full pre-injector system, the needed changes in the Super Proton Synchrotron being the most expensive.
Cost
The total cost of the project is expected to be €3.2–6.4 billion. The construction of the LHC was approved in 1995 with a budget of 2.6 billion Swiss francs (€1.6 billion), with another 210 million francs (€140 million) towards the cost of the experiments. However, cost over-runs, estimated in a major review in 2001 at around 480 million francs (€300 million) for the accelerator and 50 million francs (€30 million) for the experiments, along with a reduction in CERN's budget, pushed the completion date from 2005 to April 2007. The superconducting magnets were responsible for 180 million francs (€120 million) of the cost increase. There were also engineering difficulties encountered while building the underground cavern for the Compact Muon Solenoid, in part due to faulty parts loaned to CERN by fellow laboratories Argonne National Laboratory, Fermilab, and KEK.
David King, the former Chief Scientific Officer for the United Kingdom, has criticised the LHC for taking a higher priority for funds than solving the Earth's major challenges; principally climate change, but also population growth and poverty in Africa.
Computing resources
The LHC Computing Grid is being constructed to handle the massive amounts of data produced by the Large Hadron Collider. It incorporates both private fiber optic cable links and existing high-speed portions of the public Internet, enabling data transfer from CERN to academic institutions around the world.
The distributed computing project LHC@home was started to support the construction and calibration of the LHC. The project uses the BOINC platform to simulate how particles will travel in the tunnel. With this information, the scientists will be able to determine how the magnets should be calibrated to gain the most stable "orbit" of the beams in the ring.
Safety issues
Safety of particle collisions
Main article: Safety of the Large Hadron Collider
The upcoming experiments at the Large Hadron Collider have sparked fears among the public that the LHC particle collisions might produce doomsday phenomena, including dangerous microscopic black holes and strange matter. Two CERN-commissioned safety reviews have examined these concerns and concluded that the experiments at the LHC present no danger and that there is no reason for concern, a conclusion expressly endorsed by the American Physical Society, the world's second largest organization of physicists.
Operational safety
The size of the LHC constitutes an exceptional engineering challenge with unique operational issues on account of the huge energy stored in the magnets and the beams. While operating, the total energy stored in the magnets is 10 GJ (equivalent to one and a half barrels of oil, or 2.4 tons of TNT), and the total energy carried by the two beams reaches 724 MJ (about a tenth of a barrel of oil, or half a lightning bolt). Loss of only one ten-millionth part (10⁻⁷) of the beam is sufficient to quench a superconducting magnet, while the beam dump must absorb 362 MJ, an energy equivalent to that of burning eight kilograms of oil, for each of the two beams. These immense energies are even more impressive considering how little matter is carrying them: under nominal operating conditions (2,808 bunches per beam, 1.15×10¹¹ protons per bunch), the beam pipes contain 1.0×10⁻⁹ grams of hydrogen, which, in standard conditions for temperature and pressure, would fill the volume of one grain of fine sand. On August 10, 2008, a group of hackers calling themselves the Greek Security Team defaced a website at CERN, criticizing its computer security. There was no access to the control network of the collider.
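As a back-of-the-envelope check of the beam-energy and beam-mass figures, the snippet below recomputes them from the bunch numbers given above, using standard physical constants; the results are approximate and for illustration only.

# Back-of-the-envelope check of the beam-energy and beam-mass figures above.
bunches_per_beam = 2808
protons_per_bunch = 1.15e11
proton_energy_J = 7e12 * 1.602176634e-19    # 7 TeV expressed in joules
proton_mass_kg = 1.672621924e-27

protons_per_beam = bunches_per_beam * protons_per_bunch
energy_per_beam_MJ = protons_per_beam * proton_energy_J / 1e6
mass_both_beams_g = 2 * protons_per_beam * proton_mass_kg * 1e3

print(f"{energy_per_beam_MJ:.0f} MJ per beam")              # ~362 MJ
print(f"{2 * energy_per_beam_MJ:.0f} MJ for both beams")    # ~724 MJ
print(f"{mass_both_beams_g:.1e} g of protons in the ring")  # ~1.1e-9 g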
Construction accidents and delays
On 25 October 2005, a technician was killed in the LHC tunnel when a crane load was accidentally dropped. On 27 March 2007 a cryogenic magnet support broke during a pressure test involving one of the LHC's inner triplet (focusing quadrupole) magnet assemblies, provided by Fermilab and KEK. No one was injured. Fermilab director Pier Oddone stated, "In this case we are dumbfounded that we missed some very simple balance of forces." This fault had been present in the original design, and remained during four engineering reviews over the following years. Analysis revealed that its design, made as thin as possible for better insulation, was not strong enough to withstand the forces generated during pressure testing. Details are available in a statement from Fermilab, with which CERN is in agreement. Repairing the broken magnet and reinforcing the eight identical assemblies used by the LHC delayed the startup date, then planned for November 2007, by several weeks.
In popular culture
Aerial view of CERN and the surrounding region of Switzerland and France
The Large Hadron Collider was featured in Angels & Demons by Dan Brown, which involves dangerous antimatter created at the LHC being used as a weapon against the Vatican. CERN published a "Fact or Fiction?" page discussing the accuracy of the book's portrayal of the LHC, CERN, and particle physics in general. The movie version of the book has footage filmed on-site at one of the experiments at the LHC; the director, Ron Howard, met with CERN experts in an effort to make the science in the story more accurate. CERN employee Katherine McAlpine's "Large Hadron Rap" surpassed three million YouTube views on 15 September 2008. BBC Radio 4 commemorated the switch-on of the LHC on 10 September 2008 with "Big Bang Day". Included in this event was a radio episode of the TV series Torchwood, with a plot involving the LHC, entitled Lost Souls. CERN's director of communications, James Gillies, commented, "The CERN of reality bears little resemblance to that of Joseph Lidster's Torchwood script."
Large hadron rap:
CERN and the LHC (Large Hadron Collider)
This video talks about the collimators used in the tunnel of the Large Hadron Collider (LHC). The LHC is the biggest supercollider in the world and is also the largest machine in the world. Engineers at CERN chose National Instruments products to control the collimators.
CLICK HERE FOR SOURCE:
