Thursday, December 27, 2007

Why It’s Crucial To Have A Dedicated Server

by: Stu Pearson

A dedicated server is a single web server, a computer on the internet that hosts websites and serves pages as visitors request them. A dedicated server sits within a network of computers but is dedicated exclusively to one customer, usually a large business, since it can meet so many needs.

Dedicated servers are most commonly used in the web hosting industry, where hundreds of sites may be hosted on one dedicated server. A dedicated server is considered the next step up from a shared hosting environment. Having your own dedicated server frees you from worrying about other websites slowing you down or crashing your server. A dedicated server also gives you total control and lets you install whatever software your website needs, opening the door to extra performance.

The advantage of having a dedicated server is that its client can customize both the hardware and software setup to meet specific needs, such as faster data access and effortless handling of traffic to the site.

Dedicated servers also come with good customer service: the web host works with the client to make sure the dedicated server meets the client's needs. For a company with several divisions, such as a chain of outlets, a dedicated server is still practical, because many domains can easily be created on a single server. This is more efficient than leasing host space on a different web server for each division or outlet individually.
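
A simple way to see the many-domains-on-one-server idea is a web-server configuration. The sketch below uses Apache name-based virtual hosts; the domain names and paths are hypothetical, not taken from the article:

```apache
# Two divisions' sites served from one dedicated server; Apache picks
# the site by matching the Host header of each incoming request.
<VirtualHost *:80>
    ServerName florida.example.com
    DocumentRoot /var/www/florida
</VirtualHost>

<VirtualHost *:80>
    ServerName colorado.example.com
    DocumentRoot /var/www/colorado
</VirtualHost>
```

Adding a site for another division is just one more VirtualHost block, with no extra hardware or hosting lease.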

For a large company with a website for every dealership, such as a motorcycle manufacturer, the advantage of a dedicated server is that the parent company can put all of the dealership websites on the same server.

Here’s how it works. Assume a company named “Taurus Cars Corp.” runs a series of dealerships; the parent company could host an individual website for every dealership. Each website might look like this: for the parent company “”, for the dealership in Florida “”, for the dealership in Colorado “”, and so on.

The advantage of this setup is that the dealership in Florida uses the same online tools as the dealership in Colorado. This makes the company's online presence streamlined and cost-efficient, while making administration and support significantly easier, including customization and uniform point-of-sale software. Because the websites of the different divisions or dealerships all reside on one dedicated server, this advantage can translate into increased sales.

In some cases, other businesses want to use dedicated servers for the sole purposes of customization, customer service, and fast access. They host websites themselves or, better yet, sub-lease the extra space to interested companies to set up their websites and domains. A further advantage for any business on a dedicated server is the ability to enhance security.

All these advantages imply that dedicated servers are the best option for most large companies or businesses. Hosting a personal website or a small business website generally doesn't require a dedicated server; for that purpose, you can simply lease space from a standard web host.

Source: ArticleCity

NetSuite IPO generates $185 million

NetSuite's initial public offering raised $185.4 million, close to double the amount the company originally forecast the IPO would generate and a positive sign for the market of hosted, software-as-a-service business applications.

NetSuite, a provider of hosted applications for small and midsize organizations, sold 6.2 million shares of common stock at $26 per share via auction and its underwriters acted on their option to buy an additional 930,000 shares, the company said Wednesday.

The $185.4 million is a gross amount from which underwriting discounts and expenses will be deducted, the company said. Moreover, 365,000 of the shares bought by the underwriters were owned by stockholders like NetSuite's CEO and chief technology officer, so the proceeds from those shares -- about $9.5 million -- won't go to the company.

In the final IPO prospectus, NetSuite had estimated that, if the underwriters exercised their option in full -- as they did -- NetSuite's IPO net proceeds would be approximately $161.9 million.

NetSuite initially set a range of $13 to $16 per share for its IPO but subsequently raised it and eventually set the price at $26 per share on Thursday of last week, the day when the stock started trading on the New York Stock Exchange under the "N" symbol.

At midmorning Wednesday, the stock was trading at $34.85, down about 10 percent from its previous close. Its highest point so far is $45.98.

Incorporated in 1998, NetSuite sells ERP (enterprise resource planning), CRM (customer relationship management) and e-commerce applications on a hosted, subscription-based model, in which customers access the software via the Internet and don't need to install it on their premises.

This software-as-a-service model, championed by companies like Google, is considered a significant threat to the traditional approach from vendors like Microsoft of having customers implement software on their own PCs and servers.

As of Sept. 30 of this year, NetSuite had over 5,400 active customers, according to its IPO prospectus. In 2006, the company had revenue of $67.2 million and a net loss of $35.7 million. In the first nine months of this year, it generated $76.8 million in revenue and racked up a net loss of $20.6 million, according to the prospectus.

NetSuite plans to use its IPO proceeds to pay off an $8 million balance on a line of credit with Tako Ventures, an entity controlled by Oracle CEO Larry Ellison, and to possibly make acquisitions.

As of Nov. 30, Ellison controlled about 60 percent of NetSuite's outstanding stock -- some 31.9 million shares -- but Ellison transferred those shares into a holding company, NetSuite Restricted Holdings. The move is meant to "effectively eliminate" Ellison's voting power and avoid potential conflicts of interests, according to the prospectus.

NetSuite also plans to devote between $10 million and $15 million for capital expenditures, including the purchase of property, plant and equipment and the addition of a second data-center facility, and for working capital and other general purposes, including its international expansion. NetSuite currently provides its hosted applications out of a single data center, which the company admits in the prospectus is a risk that could harm its business should there be any service disruption at this facility.

NetSuite also said it may use a portion of the net proceeds to acquire other businesses, products or technologies, although it doesn't have any agreements or commitments for any specific acquisitions at the moment. The money it doesn't spend will be invested in short-term, interest-bearing investment grade securities.
Tropos Networks has set up a network of about 70 meshed Wi-Fi routers in Mecca, providing Hajjis with free Internet connectivity during their annual gathering.

The millions of pilgrims in Mecca this week for the Hajj, an annual gathering of Muslims, can stay connected thanks to a temporary Wi-Fi mesh network covering a large part of the city.

Hajjis, as the pilgrims are called, come to the city in Saudi Arabia from around the world for several days of religious rituals. More than 2 million gather each year. A network of about 70 meshed routers from Tropos Networks has been set up to provide free Internet connectivity, according to Denise Barton, director of marketing at Tropos. Users only have to register before using it. Barton believes it is the first public Wi-Fi network set up for the Hajj.

Mesh networks are well-suited to temporary deployments because they need fewer fixed-line connections than do traditional Wi-Fi systems. Packets can hop from one router to another until they reach one that's connected to a landline. The technology has also been used for permanent municipal Wi-Fi networks, including the one Google had built with Tropos equipment in its hometown of Mountain View, California.

Saudi Arabia's Communications and Information Technology Commission appointed an Internet service provider, Bayanat Al-Oula, to provide the temporary network. It was rolled out in less than 60 days with no help from Tropos personnel, Barton said. Aptilo Networks, a wireless-management software and services company based in Stockholm, is running the network as a managed service from its offices in Malaysia. This also helped Bayanat get the infrastructure up and running quickly, said Per Knutsson, Aptilo's co-founder and director of product development. Aptilo is handling user authentication, security, customer-support calls and other features, as well as setting up the portal visitors use to register for the network.

Aptilo and Tropos are no strangers to temporary Wi-Fi networks. Aptilo has managed systems for large sporting events, and in 2004 Tropos built a network in downtown Redwood City, California, for a high-profile murder case. The trial of Scott Peterson, who was convicted of killing his wife and their unborn child, drew massive public and media attention. The county court where the trial took place set up a temporary network with five surveillance cameras and five meshed routers for its own security needs and for reporters who descended on the area for several months.

The Mecca network is made up of Tropos 5210 mesh routers, which use IEEE 802.11a for mesh connections and can support users with 802.11a, b and g devices, Barton said. Such networks typically require between 20 and 40 routers per square mile, she said.
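
Those figures imply rough coverage bounds for the Mecca deployment. A back-of-the-envelope check, assuming the 70-router count and the 20 to 40 routers-per-square-mile density quoted above:

```python
routers = 70
density_low, density_high = 20, 40  # routers per square mile (typical range)

# Higher density means less area per router, so the bounds swap:
max_area = routers / density_low    # sparsest deployment covers the most area
min_area = routers / density_high   # densest deployment covers the least
print(f"{min_area:.2f} to {max_area:.2f} square miles")  # 1.75 to 3.50 square miles
```

So 70 routers would cover somewhere between roughly 1.75 and 3.5 square miles, consistent with "a large part of the city" rather than all of it.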

By Stephen Lawson, IDG News Service
December 19, 2007

Cisco green plan looks beyond routers

Cisco Systems wants to turn the enterprise data network into an electricity meter.

Using open standards, the company wants to get server and storage vendors to collect and share information about their equipment and send it to Cisco routers and switches. The data could include power consumption, operating temperature and more. It's becoming a critical job, and because the network touches all IT resources across the enterprise, data collection should happen there, according to Paul Marcoux, vice president of green engineering.

Marcoux joined Cisco from American Power Conversion only about six weeks ago, after Cisco created the position to oversee energy issues across all parts of the company. Networking gear itself makes up a much smaller portion of IT power consumption than servers or storage do, but Cisco plans to go beyond just making its own products more efficient.

Power is a growing issue in datacenters as the cost of energy rises and concerns about global climate change increase. Being able to collect and analyze information about power usage is a big part of the battle and becoming more crucial in the age of virtualization, according to Marcoux. Distributing storage and processing cycles without regard for power issues is not just inefficient, it's dangerous, he said.

If virtualization software looks at a process that requires more computing power or storage space, then enlists servers or storage devices that are near to overheating or running out of power, it could send a rack of servers over the edge and shut it down, Marcoux said. For that reason, the virtualization system needs to know the power status of all the resources it may call upon, he said.

By the same token, consolidated datacenters typically serve many departments of an enterprise and consume a lot of power, but those groups generally don't have to pay for their part of the power. In fact, the electricity bill often bypasses even the IT department, going to building management instead, Marcoux said. Collecting data about the power consumed by each device, and eventually by individual transactions, would allow enterprises to bill each department for the power it uses, he said.
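
The chargeback idea Marcoux describes amounts to grouping per-device power readings by the department that owns each device. A minimal sketch; the device names, readings and electricity rate here are made up for illustration:

```python
# (device, department, average watts drawn over the billing period)
readings = [
    ("web-01", "marketing", 350.0),
    ("db-01", "finance", 420.0),
    ("db-02", "finance", 410.0),
]
HOURS = 24 * 30   # one-month billing period
RATE = 0.12       # dollars per kWh (illustrative)

bills = {}
for device, dept, watts in readings:
    kwh = watts * HOURS / 1000.0          # watt-hours -> kilowatt-hours
    bills[dept] = bills.get(dept, 0.0) + kwh * RATE

for dept, amount in sorted(bills.items()):
    print(f"{dept}: ${amount:.2f}")
```

In Cisco's vision, the readings would be collected by the switches and routers the devices connect through, rather than entered by hand.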

Software on routers and switches would collect the information and then take actions or forward it on to separate building management, energy management or virtualization control systems, Marcoux said. Given the large amount of energy data to be processed, Cisco may introduce daughtercards for its platforms to provide extra computing power, he said. He hopes the technology will be in place and collecting information in enterprises within three years.

Because datacenters contain gear from so many vendors, open standards are the only way to make such a system work, according to Cisco. Fortunately, several suitable standards are already available, which should help speed up adoption, Marcoux said.

"We're not trying to reinvent the wheel, we're just trying now to utilize the wheel," Marcoux said.

Cisco's proposal would represent a whole new role for networks beyond communications, said Burton Group analyst Dave Passmore. Server vendors might go along with the plan, but Cisco can't count on smooth sailing, he said. Centralized power regulation would play a role in overall management of the datacenter, an area where Cisco is attempting to make inroads with other initiatives as well.

"Who controls virtualization in the data center is going to be the new battleground," Passmore said.


Monday, October 8, 2007

Playstation 3 New Face


Lagging a full year behind the release of Microsoft's Xbox 360 and lacking the immediately attention-grabbing hook of Nintendo's 360-degree motion-sensing Wii, Sony's long-awaited PlayStation 3 has recently been the subject of much heated debate. Despite its obvious appeal to diehard gamers and fans of the world's most popular console brand – not to mention home theater enthusiasts, what with 1080p HDMI output and extensive online music/video download capabilities – questions have been plentiful.

For example: Is the system, available in a 20GB model ($499, sans Wi-Fi and the built-in combination Memory Stick, CompactFlash and SD/MMC card reader) or a chrome-trimmed, wireless-ready 60GB model ($599), worth the hefty asking price, the highest since early-'90s systems like CD-i and 3DO? Can Sony, which has recently cut its North American November 17th launch-date ship projections back to just 400,000 units (with some analysts predicting actual distribution of half that number or fewer), manage to avoid aggravating a soon-to-be-device-deprived buying public while still keeping up with the competition? And, of course, with so much power and hardware combined in a single unit catered to the highest-end luxury users, is there even a point to upgrading?

The short answer to all: Yes, depending on which school of thought you fall into, your game-playing habits and how much disposable income you've got to burn. However, let's get one thing out of the way up front, before you freeze your poor behind off camping out all night in front of the local electronics retailer hoping to score one of the severely under-stocked devices. For a host of reasons ranging from technical niggles to launch-lineup shortfalls to pure common sense, it's perfectly fine – and in most cases even advisable – to skip buying one this holiday season and wait until the dust settles sometime early in 2007.

Right from the get-go, it's important to consider the following fact: You're not actually buying a videogame console here (although surely that's the machine's strength and the chief function most prospective buyers intend to put it to) so much as a full-fledged digital media hub. As slick as everything from cutting-edge digital diversions to Blu-ray movies looks – video resolutions ranging all the way from 480i up to an eye-popping 1080p are supported – it's what you personally make of the machine that gives the gizmo its true value. So for all of you who've been pestered since, oh, 2004 by your wide-eyed little pride and joys, remember: Dropping $599 just so the kids can use the beast as an overgrown Atari may be a little much. They'll be just as entertained by lower-resolution outings for other systems like Nintendo's Wii or Sony's own PlayStation 2. And, in truth, most PlayStation 3 titles right now are simply enhanced ports of existing products anyway (see offerings like Tony Hawk's Project 8 or NHL 2K7). What's more, unless you plan on clocking in time behind the controller yourself, investing in a library of next-generation movies, browsing the Web on your TV, purchasing extra levels/cars/characters/songs/films online or building the ultimate technophile's living-room setup, it's the sort of holiday gift that may be a little extravagant for anyone younger than 15.

Thursday, September 27, 2007

Virtual Local Area Network

Virtual LAN
From Wikipedia, the free encyclopedia

A virtual LAN, commonly known as a vLAN or as a VLAN, is a method of creating independent logical networks within a physical network. Several VLANs can co-exist within such a network. This helps in reducing the broadcast domain and aids network administration by separating logical segments of a LAN (such as company departments) that should not exchange data over the LAN (they can still exchange data through routing).
A VLAN consists of a network of computers that behave as if connected to the same link layer network - even though they may actually be physically connected to different segments of a LAN. Network administrators configure VLANs through software rather than hardware, which makes them extremely flexible. One of the biggest advantages of VLANs emerges when physically moving a computer to another location: it can stay on the same VLAN without the need for any hardware reconfiguration.

VLANs allow network administrators to:
Increase the number of broadcast domains while reducing the size of each, which in turn reduces network traffic and increases network security (both of which are hampered by a single large broadcast domain).
Reduce management effort to create subnetworks.
Reduce hardware requirements, as networks can be separated logically instead of physically.
Increase control over multiple traffic types.
Create multiple logical switches in a physical switch.

Protocols and design
The primary protocol currently used in configuring virtual LANs is IEEE 802.1Q, which describes how traffic on a single physical network can be partitioned into virtual LANs by tagging each frame or packet with extra bytes to denote which virtual network the packet belongs to.
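
The tag layout 802.1Q describes, four extra bytes carrying a 12-bit VLAN ID, can be sketched in a few lines of Python. This is a toy illustration of the frame format, not a real networking stack:

```python
import struct

TPID = 0x8100  # EtherType value that marks a frame as 802.1Q-tagged

def tag_frame(frame: bytes, vid: int, pcp: int = 0, dei: int = 0) -> bytes:
    """Insert a 4-byte 802.1Q tag after the destination/source MAC addresses."""
    tci = (pcp << 13) | (dei << 12) | (vid & 0x0FFF)
    return frame[:12] + struct.pack("!HH", TPID, tci) + frame[12:]

def read_vid(frame: bytes):
    """Return the VLAN ID of a tagged frame, or None if untagged."""
    (ethertype,) = struct.unpack("!H", frame[12:14])
    if ethertype != TPID:
        return None
    (tci,) = struct.unpack("!H", frame[14:16])
    return tci & 0x0FFF

# A minimal dummy frame: 6-byte dst MAC, 6-byte src MAC, EtherType, payload.
plain = b"\xff" * 6 + b"\xaa" * 6 + b"\x08\x00" + b"payload"
tagged = tag_frame(plain, vid=42)
print(read_vid(plain))   # None  (no tag)
print(read_vid(tagged))  # 42
```

The 12-bit VID field is what limits a single 802.1Q network to 4,094 usable VLANs (IDs 0 and 4095 are reserved).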
Prior to the introduction of the 802.1Q standard, several proprietary protocols existed, such as Cisco's ISL (Inter-Switch Link, a variant of IEEE 802.10) and 3Com's VLT (Virtual LAN Trunk). ISL is no longer supported by Cisco.
Early network designers often configured VLANs with the aim of reducing the size of the collision domain in a large single Ethernet segment and thus improving performance. When Ethernet switches made this a non-issue (because each switch port is a collision domain), attention turned to reducing the size of the broadcast domain at the MAC layer. Virtual networks can also serve to restrict access to network resources without regard to physical topology of the network, although the strength of this method remains debatable as VLAN Hopping is a common means of bypassing such security measures.
Virtual LANs operate at Layer 2 (the data link layer) of the OSI model. However, administrators often configure a VLAN to map directly to an IP network, or subnet, which gives the appearance of involving Layer 3 (the network layer).
In the context of VLANs, the term "trunk" denotes a network link carrying multiple VLANs, which are identified by labels (or "tags") inserted into their packets. Such trunks must run between "tagged ports" of VLAN-aware devices, so they are often switch-to-switch or switch-to-router links rather than links to hosts. (Confusingly, the term "trunk" is also used for what Cisco calls "channels": Link Aggregation or Port Trunking.) A router (Layer 3 device) serves as the backbone for network traffic going across different VLANs.
On Cisco devices, VTP (VLAN Trunking Protocol) allows for VLAN domains, which can aid in administrative tasks. VTP also allows "pruning", which involves directing specific VLAN traffic only to switches which have ports on the target VLAN.

Assigning VLAN memberships
Four methods of assigning VLAN membership are in use:
Port-based: A switch port is manually configured to be a member of a VLAN. In order to connect a port to several VLANs (for example, a link with VLANs spanning over several switches) the port has to be member of a trunk. Only one VLAN on a port can be set untagged (3Com's term) or access mode (Cisco's term); the switch will add this VLAN's tags to untagged received frames and remove this VLAN's tag from transmitted frames.
MAC-based: VLAN membership is based on the MAC address of the workstation. The switch has a table listing the MAC address of each machine, along with the VLAN to which it belongs.
Protocol-based: Layer 3 data within the frame is used to determine VLAN membership. For example, IP machines can be classified as the first VLAN, and AppleTalk machines as the second. The major disadvantage of this method is that it violates the independence of the layers, so an upgrade from IPv4 to IPv6, for example, will cause the switch to fail.
Authentication based: Devices can be automatically placed into VLANs based on the authentication credentials of a user or device using the 802.1x protocol.

Port-based VLANs
A port-based VLAN switch determines the membership of a data frame by examining the configuration of the port that received the transmission, or by reading a portion of the data frame's tag header. A four-byte field in the header identifies the VLAN the frame belongs to. If the frame has no tag header, the switch checks the VLAN setting of the port that received the frame.
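
That decision, use the tag's VLAN ID if one is present, otherwise fall back to the receiving port's configured VLAN, can be sketched as a small function. A toy illustration, not switch firmware:

```python
def classify(frame: bytes, ingress_port: int, port_default_vlan: dict) -> int:
    # A tagged frame has EtherType 0x8100 right after the two MAC addresses;
    # the low 12 bits of the next two bytes are the VLAN ID.
    if frame[12:14] == b"\x81\x00":
        return int.from_bytes(frame[14:16], "big") & 0x0FFF
    # Untagged: use the VLAN configured on the port that received the frame.
    return port_default_vlan[ingress_port]

# Port 1 defaults to VLAN 10, port 2 to VLAN 20 (illustrative numbers).
port_default_vlan = {1: 10, 2: 20}
untagged = b"\xff" * 6 + b"\xaa" * 6 + b"\x08\x00"
tagged = b"\xff" * 6 + b"\xaa" * 6 + b"\x81\x00" + (30).to_bytes(2, "big")
print(classify(untagged, 1, port_default_vlan))  # 10 (port setting)
print(classify(tagged, 1, port_default_vlan))    # 30 (from the tag)
```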


Wednesday, September 12, 2007


Wireless operation permits services, such as long-range communications, that are impossible or impractical to implement with wires. The term is commonly used in the telecommunications industry to refer to telecommunications systems (e.g., radio transmitters and receivers, remote controls, computer networks, network terminals, etc.) which use some form of energy (e.g., radio frequency (RF), infrared light, laser light, visible light, acoustic energy, etc.) to transfer information without the use of wires. Information is transferred in this manner over both short and long distances.
Wireless communication may be via:
radio frequency communication,
microwave communication, for example long-range line-of-sight via highly directional antennas, or short-range communication, or
infrared (IR) short-range communication, for example from remote controls or via IrDA.
Applications may involve point-to-point communication, point-to-multipoint communication, broadcasting , cellular networks and other wireless networks.
The term "wireless" should not be confused with the term "cordless", which is generally used to refer to powered electrical or electronic devices that are able to operate from a portable power source (e.g., a battery pack) without any cable or cord to limit the mobility of the cordless device through a connection to the mains power supply. Some cordless devices, such as cordless telephones, are also wireless in the sense that information is transferred from the cordless telephone to the telephone's base unit via some type of wireless communications link. This has caused some disparity in the usage of the term "cordless", for example in Digital Enhanced Cordless Telecommunications.
In the last 50 years, the wireless communications industry has experienced drastic changes, driven by many technological innovations.

Wednesday, July 25, 2007

System console

The system console, root console or simply console is the text entry and display device for system administration messages, particularly those from the BIOS or boot loader, the kernel, from the init system and from the system logger.

On traditional minicomputers, the console was a serial console, an RS-232 serial link to a terminal such as a DEC VT100. This terminal was usually kept in a secured room since it could be used for certain privileged functions such as halting the system or selecting which media to boot from. Large midrange systems, e.g. those from Sun Microsystems, Hewlett-Packard and IBM, still use serial consoles. In larger installations, the console ports are attached to multiplexers or network-connected multiport serial servers that let an operator connect his terminal to any of the attached servers.

On PCs, the computer's attached keyboard and monitor have the equivalent function. Since the monitor cable carries video signals, it cannot be extended very far. Often, installations with many servers therefore use keyboard/video multiplexers (KVM switches) and possibly video amplifiers to centralize console access. In recent years, KVM/IP devices have become available that allow a remote computer to view the video output and send keyboard input via any TCP/IP network and therefore the Internet.

Some PC BIOSes, especially in servers, also support serial consoles, giving access to the BIOS through a serial port so that the simpler and cheaper serial console infrastructure can be used. Even where BIOS support is lacking, some operating systems, e.g. FreeBSD and Linux, can be configured for serial console operation either during bootup, or after startup.
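
On Linux, for instance, serial console operation is typically enabled through kernel boot parameters. A sketch of the relevant lines in a GRUB 2 `/etc/default/grub`; the port number and baud rate are illustrative and depend on the hardware:

```shell
# Send kernel and boot messages to the first serial port at 115200 baud,
# while keeping the local display (tty0) active as well.
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"
```

A getty must also be run on the serial line (or `systemd` started with the matching `serial-getty` unit) for post-boot logins over the console.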

It is usually possible to log in from the console. Depending on configuration, the operating system may treat a login session from the console as being more trustworthy than a login session from other sources.

Routers and managed switches (as well as other networking and telecoms equipment) may also have console ports; in particular, Cisco Systems routers and switches running Cisco IOS are normally configured via their console ports.

From Wikipedia, the free encyclopedia

White screen of death

In Windows 95, Windows 98, Windows Me, Windows 2000, and (possibly) also Windows NT, a White Box of Doom or White Screen of Death will appear if an error occurs while loading a system-critical application or process, such as the system shell (e.g. Explorer.exe). It has a similar appearance to a General Protection Fault, or GPF.

Clicking "OK" will normally cause the computer to shut down, or it may result in another similar error. This error may also appear if an illegal operation occurs during the startup process. It can also freeze the computer, forcing the user to shut it down.

Some computers, such as Dell computers, have boot errors that result in a WSOD

This error may occasionally be ignored, as it may just have been a single occurrence of a failed startup sequence; however, if it should occur repeatedly, the user may then be required to reinstall Windows.

A way to work around this, if you don't want to reinstall Windows, is to set the shell to Winfile.exe.

BSOD ( Blue Screen of Death)

Windows NT

In Windows NT, Windows 2000, Windows XP, Windows Server 2003, and Windows Vista, the blue screen of death occurs when the kernel or a driver running in kernel mode encounters an error from which it cannot recover. This is usually caused by an illegal operation being performed. The only safe action the operating system can take in this situation is to restart the computer. As a result, data may be lost, as users are not given an opportunity to save data that has not yet been saved to the hard drive.

Blue screens are known as "Stop errors" in the Windows Resource Kit documentation. They are referred to as "bug checks" in the Windows Software development kit and Driver development kit documentation.

Windows 2000 (can also be configured to display debug info like the Windows NT example)

The text on the error screen contains the code of the error as well as its symbolic name (e.g. 0x0000001E, KMODE_EXCEPTION_NOT_HANDLED) along with four error-dependent values in parentheses that are there to help software engineers with fixing the problem that occurred. Depending on the error code, it may display the address where the problem occurred, along with the driver which is loaded at that address. Under Windows NT and 2000, the second and third sections of the screen may contain information on all loaded drivers and a stack dump, respectively. The driver information is in three columns; the first lists the base address of the driver, the second lists the driver's creation date (as a Unix timestamp), and the third lists the name of the driver.

By default, Windows will create a memory dump file when a blue screen error occurs. Depending on the OS version, there may be several formats this can be saved in, ranging from a 64 KB "minidump" to a "complete dump", which is effectively a copy of the entire contents of physical RAM. The resulting memory dump file may be debugged later, using a kernel debugger. A debugger is necessary to obtain a stack trace, and may be required to ascertain the true cause of the problem; the information on-screen is limited and thus possibly misleading, and may hide the true source of the error.

Windows NT 3.5

Microsoft Windows can also be configured to send live debugging information to a kernel debugger running on a separate computer. (Windows XP also allows for kernel debugging from the machine that is running the OS.) If a blue screen error is encountered while a live kernel debugger is attached to the system, Windows will halt execution and cause the debugger to "break in", rather than displaying the BSOD. The debugger can then be used to examine the contents of memory and determine the source of the problem.

The Windows debugger is available as a free download from Microsoft.

Windows includes a feature that can be used to cause a blue screen manually. To enable it, the user must add a value to the Windows registry. After that, a BSOD will appear when the user presses the SCROLL LOCK key twice while holding the right CTRL key.[3] This feature is primarily useful for obtaining a memory dump of the computer while it is in a given state. As such, it is generally used to aid in troubleshooting system hangs.
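
The registry value in question is documented by Microsoft for PS/2 keyboards (USB keyboards use the kbdhid service instead of i8042prt). A .reg fragment that enables the feature might look like this:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters]
"CrashOnCtrlScroll"=dword:00000001
```

A reboot is required before the key combination takes effect, and the resulting crash is a deliberate MANUALLY_INITIATED_CRASH stop error.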

By default, Windows XP is configured to save only a 64K minidump when it encounters a blue screen, and then to automatically reboot the computer. Because this process happens very quickly, the blue screen may be seen only for an instant or not at all. Users have sometimes noted this as a random reboot rather than a traditional stop error, and are only aware of an issue after Windows reboots and displays a notification that it has recovered from a serious error.

A BSOD can also be caused by a critical boot loader error, where the operating system is unable to access the boot partition due to incorrect storage drivers or similar problems. The error code in this situation is STOP 0x0000007B (INACCESSIBLE_BOOT_DEVICE). In such cases, there is no memory dump saved. Since the system is unable to boot from the hard drive in this situation, correction of the problem often requires booting from the Microsoft Windows CD. After booting to the CD, it may be possible to correct the problem by performing a repair install or by using the Recovery Console (with CHKDSK).

The color blue was chosen because there was a version of Windows NT for the DEC Alpha platform on which the console colors could not easily be changed. For consistency, blue became the color for Stop errors on all platforms (Alpha/i386/MIPS/PPC).



ReactOS, an attempt at creating a free software/open source implementation of a Windows NT-compatible operating system, also features its own BSOD similar to the Windows NT/XP one.

Windows 9x

Windows 9x/Me

The blue screen of death also occurs in Microsoft's home desktop operating systems Windows 95, 98, and Me. Here it is less serious, but more common. In these operating systems, the BSOD is the main way for virtual device drivers (VxDs) to report errors to the user; internally it is referred to as "_VWIN32_FaultPopup". A Windows 9x/Me BSOD gives the user the option either to restart or to continue. However, VxDs do not display BSODs frivolously; they usually indicate a problem that cannot be fixed without restarting the computer, so after a BSOD is displayed the system is usually unstable or unresponsive.

Two of the most common reasons for BSODs are:

  • Problems with incompatible versions of DLLs, a cause sometimes referred to as DLL hell. Windows loads these DLLs into memory when application programs need them; if versions are changed, the next time an application loads a DLL it may be different from what the application expects. These incompatibilities accumulate as more software is installed, and are one of the main reasons why a freshly installed copy of Windows is more stable than an "old" one.
  • Faulty or poorly written device drivers, hardware incompatibilities, or damaged hardware. If you have just installed a new piece of hardware, updated a driver, or installed an operating system update shortly before the BSOD appeared, be sure to investigate these causes as well.

In Windows 95 and 98, a BSOD occurred when the system attempted to access the file "c:\con\con" on the hard drive. Links to this path were often embedded in web pages to crash visitors' machines. Microsoft has released a patch for this.

A BSOD can also appear on 9x/Me if a user ejects a removable medium while it is being read. This is particularly common with Microsoft Office: a user who only wants to view a document might eject the floppy disk before exiting the program. Because Office creates a temporary file in the same directory as the document, exiting triggers a BSOD when the program tries to delete that file on a disk that is no longer in the drive.

This type of blue screen is no longer seen in Windows NT, 2000, and XP. In the case of these less serious software errors, the program may still crash, but it will not take down the entire operating system with it due to better memory management and decreased legacy support. In these systems, the "true" BSOD is seen only in cases where the entire operating system crashes.

Windows CE 5.0

Windows CE

The simplest version of the blue screen occurs in Windows CE, except in the versions for Pocket PC. The blue screen in Windows CE 3.0 is similar to the one in Windows 95 and 98.

Windows for Workgroups 3.11

Windows for Workgroups

Windows for Workgroups' Blue Screen of Death is very similar to the Windows 9x BSoD.

Xbox Error Message


Although the Microsoft Xbox usually shows a Green Screen of Death when a critical error occurs, an Xbox was seen displaying a BSOD during the presentation of Forza Motorsport at the CeBIT computer fair in Hannover in March 2005.

Boot Loader Function

Boot loader

Most computer systems can only execute code found in the memory (ROM or RAM). Modern operating systems are stored on hard disks, or occasionally on LiveCDs, USB flash drives, or other non-volatile storage devices. When a computer is first powered on, it doesn't have an operating system in memory. The computer's hardware alone cannot perform complex actions such as loading a program from disk, so an apparent paradox exists: to load the operating system into memory, one appears to need to have an operating system already loaded.

The solution is to use a special small program, called a bootstrap loader, bootstrap or boot loader. This program's only job is to load other software for the operating system to start. Often, multiple-stage boot loaders are used, in which several small programs of increasing complexity summon each other, until the last of them loads the operating system. The name bootstrap loader comes from the image of one pulling oneself up by one's bootstraps (see bootstrapping). It derives from the very earliest days of computers and is possibly one of the oldest pieces of computer terminology in common use.

Early programmable computers had a row of toggle switches on the front panel to allow the operator to manually enter the binary boot instructions into memory before transferring control to the CPU. The boot loader would then read the operating system in from an outside storage medium such as paper tape, punched card, or a disk drive.

Pseudo-assembly code for the bootloader might be as simple as the following eight instructions:

0: set the P register to 8
1: check paper tape reader ready
2: if not ready, jump to 1
3: read a byte from paper tape reader to accumulator
4: if end of tape, jump to 8
5: store accumulator to address in P register
6: increment the P register
7: jump to 1
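
The load loop above can be replayed as a short toy simulation (the tape is just a list of bytes; addresses and contents here are illustrative):

```python
def bootstrap(tape):
    """Toy simulation of the eight-instruction paper-tape loader.

    Addresses 0-7 hold the loader itself; the program is loaded
    starting at address 8, which is also where control jumps when
    the end of the tape is reached.
    """
    memory = {}
    p = 8                      # 0: set the P register to 8
    for byte in tape:          # 1-3: poll the reader, fetch a byte
        memory[p] = byte       # 5: store the accumulator at address P
        p += 1                 # 6: increment P; 7: jump back to 1
    return memory              # 4: end of tape -> jump to 8 (the program)

mem = bootstrap([0x11, 0x22, 0x33])   # a three-byte "program"
# mem maps addresses 8, 9 and 10 to the three bytes read from the tape
```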

A related example is based on a loader for a 1970s Nicolet Instrument Corporation minicomputer. Note that the bytes of the second-stage loader are read from paper tape in reverse order.

0: set the P register to 106
1: check paper tape reader ready
2: if not ready, jump to 1
3: read a byte from paper tape reader to accumulator
4: store accumulator to address in P register
5: decrement the P register
6: jump to 1

The length of the second stage loader is such that the final byte overwrites location 6. After the instruction in location 5 executes, location 6 starts the second stage loader executing. The second stage loader then waits for the much longer tape containing the operating system to be placed in the tape reader. The difference between the boot loader and second stage loader is the addition of checking code to trap paper tape read errors, a frequent occurrence with the hardware of the time, which in this case was an ASR-33 teletype.
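
A toy simulation makes the trick visible: with a 101-byte tape, the final byte lands on address 6, right on top of the loader's own "jump to 1" instruction (the tape contents here are dummy values):

```python
def reverse_loader(tape, start=106):
    """Toy simulation of the Nicolet-style loader: tape bytes are
    stored downward from address 106. A 101-byte second-stage tape
    therefore ends at address 6, overwriting the loader's own jump
    instruction so execution falls into the newly loaded code.
    """
    memory = {}
    p = start                 # 0: set the P register to 106
    for byte in tape:         # 1-3: poll the reader, fetch a byte
        memory[p] = byte      # 4: store the accumulator at address P
        p -= 1                # 5: decrement P; 6: jump back to 1
    return memory

stage2 = list(range(101))     # a dummy 101-byte second-stage tape
mem = reverse_loader(stage2)
# first byte at address 106, last byte (stage2[100]) at address 6
```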

In modern computers the bootstrapping process begins with the CPU executing software contained in ROM (for example, the BIOS of an IBM PC) at a predefined address (the CPU is designed to execute this software after reset without outside help). This software contains rudimentary functionality to search for devices eligible to participate in booting, and load a small program from a special section (most commonly the boot sector) of the most promising device. It is usually possible to configure the BIOS so that only a certain device can be booted from and/or to give priority to some devices over others (a CD or DVD drive is usually given priority over a hard disk, for instance).

Boot loaders may face peculiar constraints, especially in size; for instance, on the IBM PC and compatibles, the first stage of boot loaders located on hard drives must fit into the first 446 bytes of the Master Boot Record, in order to leave room for the 64-byte partition table and the 2-byte 0xAA55 'signature', which the BIOS requires for a proper boot loader.
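
That layout is easy to check programmatically; a minimal sketch (the sector contents here are fabricated, and note that the 0xAA55 signature is stored little-endian, so bytes 510-511 read 0x55 0xAA):

```python
def is_bootable_mbr(sector: bytes) -> bool:
    """Check the MBR layout described above: 446 bytes of boot code,
    a 64-byte partition table, and the 2-byte boot signature."""
    if len(sector) != 512:
        raise ValueError("an MBR is exactly one 512-byte sector")
    return sector[510:512] == b"\x55\xaa"

# a fake sector: 446 bytes of "code", 64 bytes of partition table, signature
fake = bytes(446) + bytes(64) + b"\x55\xaa"
```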

Some operating systems, most notably pre-1995 Macintosh systems from Apple, are so closely interwoven with their hardware that it is impossible to natively boot an operating system other than the standard one. A common solution in such situations is to design a bootloader that works as a program belonging to the standard OS that hijacks the system and loads the alternative OS. This technique was used by Apple for its A/UX Unix implementation and copied by various freeware operating systems and BeOS Personal Edition 5.

What is msconfig

MSConfig, or System Configuration Utility, is a boot configuration utility bundled with all Microsoft Windows operating systems released after 1995 except Windows 2000. Windows 2000 users can download the utility separately, however. This tool modifies which programs run at startup, edits certain configuration files, and simplifies controls over Windows services. Part of the base Windows install, it can be accessed by running 'msconfig' on any system on which the user has administrator access.

Files that can be edited through MSConfig include AUTOEXEC.BAT, CONFIG.SYS, WIN.INI, SYSTEM.INI on Windows 9x systems, and BOOT.INI on Windows NT systems. The chief benefit to using MSCONFIG to edit these files is that it provides a simplified GUI to indirectly manipulate the sections of those files and the Windows registry tree pertaining to the Windows boot sequence.
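
For illustration, a typical BOOT.INI on a single-OS Windows XP machine looks roughly like this (the exact multi/disk/rdisk/partition path depends on the hardware):

```
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect
```

MSConfig's BOOT.INI tab edits entries like these (timeout, boot switches) without the user touching the file directly.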

The most recent version of MSConfig, released with Windows Vista, features a greatly simplified user interface and increased support for managing services.

Why Defragmentation Is Needed: Causes and Cures


Fragmentation occurs when the operating system cannot or will not allocate enough contiguous space to store a complete file as a unit, and instead puts parts of it in gaps between other files (usually those gaps exist because they formerly held a file that the operating system has since deleted, or because the operating system allocated excess space for the file in the first place). As advances in technology bring larger disk drives, the performance loss due to fragmentation squares with each doubling of the size of the drive.[citation needed] Larger files and greater numbers of files also contribute to fragmentation and the consequent performance loss. Defragmentation attempts to alleviate these problems.

Consider the following scenario:

An otherwise blank disk holds five files, A, B, C, D and E, each using 10 blocks of space (for this section, a block is simply the file system's allocation unit; it could be 1 KB, 100 KB or 1 MB, and is not any specific size).

1. On a blank disk, all five files are allocated one after the other.
2. If file B is deleted, there are two options: leave its space empty and reuse it later, or move all the files after B so that the empty space sits at the end. Moving could be time-consuming if hundreds or thousands of files were involved, so in general the empty space is simply left there, marked in a table as available for later use, and reused as needed.[1]
3. If a new file F needs 7 blocks, it can be placed in the first 7 blocks of the space formerly holding B, and the 3 blocks following it remain available.
4. If another new file G needs only 3 blocks, it can occupy the space after F and before C.
5. If F subsequently needs to be expanded, the space immediately following it is no longer available, so there are again two options: add a new extent somewhere else and record that F now has two extents, or move F somewhere it can exist as one contiguous file of the new, larger size. Moving may be impossible (no single free run may be large enough) or may take an undesirably long time, so the usual practice is simply to create an extent elsewhere and chain the new extent onto the old one.

Repeat this hundreds or thousands of times and the file system ends up with free space scattered across many small segments and many files spread over many extents.
If, as a result of free space fragmentation, a newly created file (or a file which has been extended) has to be placed in a large number of extents, access time for that file (or for all files) may become excessively long.
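
The scenario above can be replayed in a few lines of code (a toy model: one character per block, '.' marking a free block):

```python
# A 60-block disk holding A..E (10 blocks each) plus 10 free blocks.
disk = list("A" * 10 + "B" * 10 + "C" * 10 + "D" * 10 + "E" * 10 + "." * 10)

def free_runs(d):
    """Return (start, length) for each contiguous run of free blocks."""
    runs, i = [], 0
    while i < len(d):
        if d[i] == ".":
            j = i
            while j < len(d) and d[j] == ".":
                j += 1
            runs.append((i, j - i))
            i = j
        else:
            i += 1
    return runs

# (2) delete B: its blocks are merely marked free
for i in range(10, 20):
    disk[i] = "."
# (3) allocate F (7 blocks) into the first free gap
for i in range(10, 17):
    disk[i] = "F"
# (4) allocate G (3 blocks) into the remaining gap
for i in range(17, 20):
    disk[i] = "G"
# (5) extend F by 4 blocks: no room after it, so chain a second extent
for i in range(50, 54):
    disk[i] = "F"

layout = "".join(disk)
# F now occupies two extents: blocks 10-16 and 50-53
```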

The process of creating, deleting, and expanding files is sometimes colloquially referred to as churn, and it occurs both at the level of the root file system and within subdirectories. Fragmentation occurs not only at the level of individual files, but also when files in a directory (and perhaps its subdirectories) that are often read in sequence start to "drift apart" as a result of churn.

A defragmentation program must move files around within the free space available to undo fragmentation. This is a resource-intensive operation and cannot be performed on a file system with little or no free space. The reorganization involved in defragmentation does not change the logical location of the files (defined as their location within the directory structure).

Another common strategy to optimize defragmentation and to reduce the impact of fragmentation is to partition the hard disk(s) in a way that separates partitions of the file system that experience many more reads than writes from the more volatile zones where files are created and deleted frequently. In Microsoft Windows, the contents of directories such as "\Program Files" or "\Windows" are modified far less frequently than they are read. The directories that contain the users' profiles are modified constantly (especially with the Temp directory and Internet Explorer cache creating thousands of files that are deleted in a few days). If files from user profiles are held on a dedicated partition (as is commonly done on UNIX systems), the defragmenter runs better since it does not need to deal with all the static files from other directories. For partitions with relatively little write activity, defragmentation performance greatly improves after the first defragmentation, since the defragmenter will need to defrag only a small number of new files in the future.



Defragging a disk will not stop a system from malfunctioning or crashing, because the file system is designed to work with fragmented files.[2] Since defrag cannot be run on a file system marked as dirty without first running chkdsk,[3] a user who runs defrag "to fix a system acting strangely" often ends up running chkdsk first. Chkdsk repairs file system errors, which may mislead the user into thinking that defrag fixed the problem when it was actually fixed by chkdsk.

In fact, in a modern multi-user operating system, an ordinary user cannot defragment the system disks since superuser access is required to move system files. Additionally, modern file systems such as NTFS are designed to decrease the likelihood of fragmentation. [4] Improvements in modern hard drives such as RAM cache, faster platter rotation speed, and greater data density reduce the negative impact of fragmentation on system performance to some degree, though increases in commonly used data quantities offset those benefits. However, modern systems profit enormously from the huge disk capacities currently available, since partially filled disks fragment much less than full disks. [5] In any case, these limitations of defragmentation have led to design decisions in modern operating systems like Windows Vista to automatically defragment in a background process but not to attempt to defragment a volume 100% because doing so would only produce negligible performance gains. [6]

Kinds of File Systems in a Computer


  • FAT: DOS 6.x and Windows 9x systems come with a defragmentation utility called Defrag. The DOS version is a limited version of Norton SpeedDisk.
  • NTFS: Windows 2000 and newer include an online defragmentation tool based on Diskeeper. NT 4 and below have no built-in defragmentation utilities.
  • ext2: uses an offline defragmenter called e2defrag, which does not work with its successor ext3 unless the ext3 filesystem is temporarily downgraded to ext2.
  • JFS: has a defragfs utility on IBM operating systems.
  • HFS Plus: in 1998 introduced a number of optimizations to the allocation algorithms, in an attempt to defragment files as they are accessed without a separate defragmenter.
  • XFS: provides an online defragmentation utility called xfs_fsr.

Aims of defragmentation


Reading and writing data on a heavily fragmented file system is slowed down as the time needed for the disk heads to move between fragments and waiting for the disk platter to rotate into position is increased (see seek time and rotational delay). For many common operations, the performance bottleneck of the entire computer is the hard disk; thus the desire to process more efficiently encourages defragmentation. Operating system vendors often recommend periodic defragmentation to keep disk access speed from degrading over time.
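
As a rough back-of-the-envelope model (the drive numbers here are hypothetical, not from the text): each additional fragment costs about one average seek plus half a revolution of rotational latency.

```python
def avg_access_ms(avg_seek_ms, rpm):
    """Average cost of reaching one fragment: seek + rotational delay."""
    rotational_ms = (60_000.0 / rpm) / 2   # half a revolution, in ms
    return avg_seek_ms + rotational_ms

def extra_delay_ms(fragments, avg_seek_ms=9.0, rpm=7200):
    """Extra delay reading a file split into `fragments` pieces,
    versus reading it contiguously ((fragments - 1) extra head moves)."""
    return (fragments - 1) * avg_access_ms(avg_seek_ms, rpm)

# For a hypothetical 7200 RPM drive with a 9 ms average seek,
# each extra fragment costs roughly 13 ms.
```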

Fragmented data also spreads over more of the disk than it needs to. Thus, one may defragment to gather data together in one area, before splitting a single partition into two or more partitions (for example, with GNU Parted, or PartitionMagic).

Defragmenting may also help to increase the lifespan of the hard drive itself, by minimizing head movement and simplifying data access operations.

Saturday, July 7, 2007

How SEO Elite Can Bring Your Web Site To The #1 Position In Google Within 3 Weeks!

by: Jhong Ren

Do you have a web site that very few people know exists? Well, it happened to mine.

I was lost when it came to "Search Engine Optimization". I absolutely had no idea until my friend recommended SEO Elite.

With a lot of similar web sites targeting the same niche as mine, it had become very hard to come up first on Google.

When I first had my blog up and did a search in various search engines like Google and Yahoo!, my blog was nowhere in sight. So how has SEO Elite brought my web site to the #1 position within weeks? Let me share what I know so that you can execute these steps immediately.

1. SEO Elite is an excellent piece of software that automatically searches for the backlinks of websites. So what I did was search for the backlinks of the number-one site for my keywords.

2. Yes, you can use freeware tools on the internet, but when I compared the search results, SEO Elite gave me more, in terms of both quality and quantity.

3. Next, I asked those sites for reciprocal links or submitted articles to directories.

4. Ask for quality links from sites with a PageRank of at least 4.

5. And again, ensure your blog has quality content, so that sites see added value in linking to you.

6. The best thing is that I knew exactly what my competitors were doing and how they earned money with their sites. I'm telling you, you will be very surprised by what your competitors are doing too.

So here is what SEO Elite taught me:

1) The concepts of on-page and off-page optimization.

a) On-page optimization is about using the various Meta tags, Meta keywords, paragraph, heading, bold, italic, and anchor tags. It also taught me to use a proper keyword density on a web site (about 6 - 8%).

b) Off-page optimization is about getting quality links to my site.

2) SEO Elite taught me what I needed to know about my competitors' web sites, so I could implement the same strategies they used. The trick is that I implemented them even better, so I could overtake their #1 ranking position within weeks.

3) SEO Elite helped me track the backlinks and PageRank of my competitors' sites so that I could copy their strategies.

4) SEO Elite comes with software that automates all of this, leaving me much more time to write articles such as this one to benefit you.

You may be telling me that you can do these tasks manually. Yes, that works if you are asking for link exchanges from ten sites. But what if there are hundreds of them, or even thousands? What are you going to do? Still do it manually?

So the good thing about SEO Elite is that it allows auto-submission of link exchange requests, submission of articles to article directories, and even checking whether other sites are still linking to you. SEO Elite can be the SEO resource for you.

source link

Facts You Should Know About Spyware

by: Arvind Singh

Every PC user must control and arrest spyware with antispyware and adware removal tools. Spyware comes in various forms, and the most commonly found is illegally installed spyware. It is used to secretly gather information in unethical ways, which is why good antispyware software needs to be used against it.

Spyware Avatars

As software that picks up information from your computer without your consent, spyware assumes many forms. It can be Trojans, web bugs, adware, or commercial software used to keep an eye on someone's computer, to track what they are doing or to illegally obtain secret information such as passwords to bank accounts. Trojan software gets into your system by duping you into thinking it is something else, just like a virus. Web bugs come as ActiveX controls and cookies that follow you around as you browse the web. Once they know your habits, they show you popups with advertisements they think you might be interested in. This sort of remote administration software can be stopped by firewalls.

Commercial computer-monitoring spyware includes URL recorders, key loggers, chat and screen recorders, program loggers, and so on; antispyware can guard you against these. Key loggers track all your keystrokes, which means just about everything you do on your computer. Then there are screen loggers that can take a picture of your screen, even if you have firewalls installed.

Can Cleaning Your Registry Or Deleting Your Startup Items Help?

The problem with spyware and adware is that they run as hidden files, so they don't show up in the task list, the registry, or the startup items. They lodge themselves where startup cleaners cannot find them and run invisibly. But good antispyware software or an adware cleaner can find them and eliminate them.

The seriousness of running antispyware cannot be emphasized enough.

source link

Not All Spyware Is Malicious But Must Be Removed

by: Arvind Singh

Though not all spyware is malicious, it must be removed all the same, because it will ultimately bloat the registry of the system, stall programs, and generally make the system unstable.

Some people believe that you may not always need to remove spyware, because spyware is not always malicious. Many kinds of spyware can infect your PC, and most of them, thankfully, are not installed with malicious intent. But why take chances? First, let us look at how spyware gets onto the system in the first place. People browsing the Internet come across many sites that offer free downloads. Clicking on these downloads may bring along some program in the background, hidden from the user, often in the form of ActiveX controls or components. When we download certain programs, the downloader flashes a message requiring the user to allow the download of an ActiveX control, without which the download will not work. This ActiveX control is registered in the CLSID entries of the registry. Once the download is complete, the ActiveX control, stored as an .OCX file, begins its stealthy work. The best spyware removal programs are especially wary of ActiveX controls and remove any malicious-looking .OCX files.

Not All Spyware Causes Damage

Spyware is not always illegal. You may have inadvertently agreed to its use by clicking on the 'agree' button without reading the agreement. This kind of spyware does not usually carry out any malicious activity; it just collects data about the Internet sites you visit and mails the information to the host it came from. This does not mean that you cannot remove it when you want: you can do so with certain free spyware removal programs that search out and remove adware and spyware alike. There is also malicious and downright criminal spyware, which locates personal information such as your credit card or online banking details and uses it for criminal activities, usually causing you a lot of damage. The system should be regularly scanned with free spyware and adware removal utilities, and the registry then cleaned with a registry cleaner, to make sure the system stays secure.

Run Anti Spyware Every Time You Think Of It

Spyware can be removed from the system using anti-spyware programs. However, anti-spyware software does not always remove the spyware's entries from the registry; special removal software is required to do just that. Free tools such as the Microsoft spyware removal tool or the Yahoo spyware removal utility can scan the registry for broken links and useless entries that are no longer linked to programs, then remove those entries, freeing up disk space and compacting the registry for efficient use.

source link

Friday, July 6, 2007

Tell Me About Computers

by: Khal Nuwar

So you want to know about computers? The most important thing to remember, if you are new to computers, is not to be afraid of them. They are just another electrical appliance, and unless you do something drastic, I promise you they won't blow up in your face!

Trial and error is your best friend when learning about computers, and a few tips on computer jargon will help a long way. I love jargon; it is so creative. Who on earth thought of "cookies"?

Let's begin with some basics: computers were first created in 1936, and Microsoft Windows was born in the 1980s. Since then computers, or should I say programmers, have become smarter and smarter, creating the wonderful array of programs that we now enjoy in our daily computing.

So let's get up to date with the meaning of some computer jargon: what some of the bits are, and what they do in your computer.

The Computer Case

The computer case is the outer casing of your tower, or the box of your computer. These cases come in an array of different sizes and styles, and many people choose fancy ones as a way of expressing their individuality.

What you choose will determine the price you pay for a computer case. Before grabbing the fanciest thing you find, remember to take note of what's 'inside' the box, and make sure you choose a tower that suits your requirements as well as your individuality!


The CPU

A CPU (Central Processing Unit) is the component in a computer that executes instructions and processes information in programs. Since the 1970s, single-chip processors have been the main type of processor; usually when people refer to CPUs they mean microprocessors.

The Mother Board

There are many different designs of motherboards available, but generally they all look much the same and do the same job. A motherboard looks like a green piece of plastic with a heap of wires and chips attached to it. Take care of your motherboard, as it is very sensitive; take precautions before touching it, or you may damage the onboard components.

Power Supply

The power supply unit in a desktop computer is the component that converts AC voltage to DC. The ATX style is the most common nowadays, although several different styles of power supply units are available on the market.

Hard Disks

I find hard disks funny, as they aren't really "disks" in the everyday sense. The "hard" originally distinguished their rigid platters from floppy disks, which really were floppy if you shook them. Nowadays "hard drive" refers to the main storage inside your computer: all your files are written onto the hard disk and stored there for future retrieval or modification.

Floppy Drive

Floppy discs aren't floppy at all! They are in fact hard plastic square things that used to be a popular way of transferring data from one computer to another. The computer reads and writes information on a small, thin circular disk set inside the plastic casing.


ROM

ROM means Read-Only Memory. ROM chips are used in many appliances, not just computers. A CD-ROM is a disc that stores information or data permanently: once it's on there, you can't delete it the way you can with your hard drive or a floppy disk (a CD-RW is the rewritable exception).

CDs are a standard means of distributing large amounts of information easily, since they are so cheap. You can use a CD writer in your computer to create your own music and photo CDs.

Now that you know some 'inside info', you should be ready to go and explore the endless options in computers and devices that are open to you. Just remember: to err is human, but to really mess things up requires a computer!

Happy Computing!

source link

The Next Generation Of Computers Is Quantum Computers

by: Robert Michael

Taking the Quantum Leap

While it may seem that the evolution of computers is about at its end, that is not the case. The next generation of computers is quantum computers.

The reason behind continuing computer evolution is our continuing thirst for speed and capacity. Way back in 1947, the engineer and computing expert Howard Aiken predicted that six digital electronic computers would satisfy the entire computing needs of the United States. The scientists and engineers who followed Aiken raised that estimate, but they too were far too conservative.

What none of them predicted was that scientific research would produce voluminous quantities of knowledge to be computed and stored; nor did they foresee the popularity of personal computers or the existence of the Internet. In fact, it's hard to say whether humankind will ever be satisfied with its computing power and capacity.

A basic computing premise, called Moore's Law, says that the number of transistors on a microprocessor doubles every 18 months and will continue to do so. This means that by no later than 2030, the number of microprocessor circuits found in computers will be astronomically high. This is expected to lead to the creation of quantum computers, whose design will use the power of molecules and atoms for processing and memory tasks. Quantum computers should be able to perform specific calculations billions of times more quickly than current silicon-based computers.
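
The projection is simple compound doubling; a small sketch (the starting transistor count is arbitrary):

```python
def transistors(base_count, years_elapsed, doubling_months=18):
    """Project Moore's-law growth: one doubling every `doubling_months`."""
    doublings = years_elapsed * 12 / doubling_months
    return base_count * 2 ** doublings

transistors(1_000_000, 3)   # three years = two doublings -> 4,000,000
```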

Quantum computers do exist today, though only a few, and they are all in the hands of scientists and scientific organizations. They are not in practical, common use; that is still many years away. The theory of quantum computing was developed in 1981 by Paul Benioff, a physicist at Argonne National Laboratory, who theorized going beyond the Turing theory to a Turing machine with quantum capabilities.

Alan Turing described the Turing machine in 1936. This theoretical machine consisted of a tape of unlimited length divided into small squares, each holding the symbol one, the symbol zero, or no symbol at all. A reading-writing device could read and write these symbols, which in turn gave these machines, the conceptual ancestors of early computers, the instructions that ran specific programs.

Benioff took this to the quantum level, saying that the reading-writing head and the tape would both exist in a quantum state. What this would mean is that those tape symbols one or zero could exist in a superposition that could be one and zero at the same time, or somewhere in between. Because of this the quantum Turing machine, in contrast to the standard Turing machine, could perform several calculations at once.

The standard Turing machine concept is what underlies today's silicon-based computers. In contrast, quantum computers encode information as quantum bits, called qubits. These qubits represent atoms that work together to act as both the processor and the computer's memory. This ability to run multiple computations at once, and to hold several states at the same time, is what gives quantum computers the potential to be millions of times as powerful as today's best supercomputers.

A quantum computer with 30 qubits would, for example, have processing power equal to a conventional computer running at 10 teraflops (trillions of operations per second). To put this in perspective, the typical computer of today runs at gigaflop speeds (billions of operations per second).
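
The scaling behind such claims is exponential: an n-qubit register spans 2^n basis states, so a classical simulation must track that many amplitudes at once. A tiny illustration:

```python
def basis_states(n_qubits):
    """Number of basis states an n-qubit register can superpose."""
    return 2 ** n_qubits

basis_states(30)   # 1,073,741,824 amplitudes for a 30-qubit register
```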

As our cry for more speed and more power from our computers continues, quantum computers are predicted to be a readily available product sometime in the not so distant future.

source link

Little Kids, Big Computers - The First Wired Generation

by: Leon Groom

The numbers are astounding. According to an eMarketer report released in 2005, 39% of children 11 years old and younger are online regularly, and 73% of teens aged 12 to 17 are online consistently. 31% of all kids have a computer in their room. Back in 2002, comScore Media Metrix found that 12- to 17-year-olds spent almost half an hour a day instant messaging each other. The same survey reported that this age group played online games almost 30 minutes a day and spent about three quarters of an hour on other web sites. This all translates into a lot of time at the keyboard and in front of the monitor.

There’s trouble brewing with all this computer use. The problems surrounding this issue include:

· children use equipment designed for adults
· children use adult-sized computer furniture
· children are not taught ergonomic computing techniques
· children spend too much time on the computer

Kids as young as eight years old are complaining of headaches, neck aches, and back aches. If you look at any child using a computer, you can see immediately why this might be. Young children have to tilt their heads back to see a monitor that is towering above them. The rule of thumb is that the user should have to look down somewhat to see the focal point on the screen. The worst health statistics regarding children using computers are related to laptops. Because laptops aren’t adjustable in any way, they put the most stress on young users.

Child sized computer equipment is available nowadays. Keyboards are designed with little hands in mind. And the keys are color coded to help new readers pick out the correct letters. Keyboards for kids come with software that assists with reading and typing skills. And it’s not just keyboards that need to fit kids. Child sized mice fit small hands. To make the mouse even easier to use, they come with colored dots on the buttons to help youngsters who might be a little weak on telling left from right. (There are probably some adults who could benefit from that, too!)

Many adults aren’t familiar with proper computer use techniques, so it shouldn’t come as a surprise that children aren’t being taught how to compute ergonomically. Parents should become knowledgeable about what constitutes proper posture and typing technique. They should set an example for children by taking frequent breaks, stretching, and using appropriate assistive ergonomic devices. Parents should also be aware of how long their children are on the computer. It’s easy for anyone to get caught up and lose track of time. A kitchen timer set for twenty minutes is a great way to enforce limits.
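The kitchen-timer tip above is easy to replicate in software. A minimal sketch (the twenty-minute interval comes from the article; the function name and session lengths are illustrative):

```python
def break_schedule(session_minutes, interval=20):
    """Return the minute marks at which a child should take a stretch break,
    spaced `interval` minutes apart through the session."""
    return list(range(interval, session_minutes + 1, interval))

print(break_schedule(60))  # [20, 40, 60]
print(break_schedule(45))  # [20, 40]
```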

Any time a child uses a computer, adjustments must be made to accommodate his size. And this doesn’t mean putting phone books on the chair. Children who use a computer frequently should have a child sized work station and equipment, or use the various adaptive devices to make computing comfortable. This is the time to protect the first wired generation from repetitive stress injuries. Five years from now is too late.

source link

How Computer Viruses Work and How to Protect your Computer

by: Matt Gundesen

Many people are afraid of tinkering with their computers because of the fear that they might inadvertently introduce a computer virus into the computer system.

Computer viruses have become the technological bogeyman that scares computer users all over the world. We have all heard how dangerous computer viruses are and how they can damage your data. Aided by the bloated image that Hollywood movies paint of computer viruses, a big majority of users now has an intense (but mostly unfounded) fear of them.

It is true that computer viruses are dangerous. Anyone who has lost vital information to a computer virus knows how much damage one can cause. But computer viruses are not unstoppable little pieces of code that will wreak havoc on the world. If you know what to do when you get a virus on your computer, you can definitely limit, if not totally stop, the damage it causes.

But what is a computer virus? It is a small piece of software that usually attaches itself to a legitimate program. Every time that program is executed, the virus is executed too, and it either tries to reproduce by attaching itself to other programs or immediately starts affecting the computer. A computer virus and an email virus have basically the same modus operandi; the difference is that an email virus attaches itself to an email message, or automatically sends itself using the addresses in the victim’s address book, in order to infect the people who receive the email.

A computer virus is usually embedded in a larger program, often a legitimate piece of software, and runs when that software is executed. The virus loads itself into the computer’s memory and then seeks out programs it can attach itself to. When a likely program is found, the virus modifies the file to add its own code to the program. The virus usually runs before the legitimate program does – in fact, it typically performs the infection first and only then lets the legitimate program run. This process is so fast that no one even notices that a virus was executed. With two programs now infected (the original carrier and the newly infected program), the same process repeats whenever either program is launched, worsening the level of infection.
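That compounding spread is essentially exponential. A toy model (illustrative arithmetic only – it models no real virus behavior) shows how fast the infected count grows if, in each round, every infected program is launched once and each launch infects one clean program:

```python
def infected_count(rounds, start=1):
    """If every infected program is launched once per round and each launch
    infects one clean program, the infected count doubles each round."""
    count = start
    for _ in range(rounds):
        count *= 2
    return count

print(infected_count(5))  # 32 infected programs after just five rounds
```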

After the infection phase, or even in the middle of it, the virus usually starts its attack on the system. The attack can range from silly actions, like flashing messages on the screen, to actually erasing sensitive data.

Fortunately, there are steps you can take to protect your computer from viruses. Among them:

* The simplest way to avoid a virus is to install a legitimate and effective antivirus program on your computer. An antivirus program is designed to look out for any activity that resembles a virus attack or infestation and to stop it automatically.
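Real antivirus engines combine signature databases with heuristics and behavior monitoring, but the core signature-matching idea can be sketched in a few lines (the signature names and byte patterns below are invented purely for illustration):

```python
# Hypothetical byte signatures -- real products ship databases of millions.
SIGNATURES = {
    "demo-virus-a": b"\xde\xad\xbe\xef",
    "demo-virus-b": b"EVIL_PAYLOAD",
}

def scan_bytes(data):
    """Return the names of any known signatures found in the given bytes."""
    return [name for name, sig in SIGNATURES.items() if sig in data]

print(scan_bytes(b"hello EVIL_PAYLOAD world"))  # ['demo-virus-b']
print(scan_bytes(b"clean file"))                # []
```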

* You can opt to use a more secure operating system on your computer. Unix-like systems, for example, are considered more resistant because their built-in security features prevent a virus from actually doing what it is programmed to do.

* Enable Macro Virus Protection in all of the Microsoft applications resident in your computer. Additionally, you should avoid running macros in a document unless you have a good idea of what these macros are going to do.

* Avoid running programs that you have downloaded from the internet, especially when they come from dubious sources.

* Never open an email attachment that contains an executable file – these are files with EXE, COM, or VBS extensions.
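The extension check in that last tip is easy to automate. A minimal sketch (the extension list comes from the article; the helper name is mine):

```python
RISKY_EXTENSIONS = {".exe", ".com", ".vbs"}  # the executable types named above

def is_risky_attachment(filename):
    """Flag an attachment whose name ends in an executable extension,
    regardless of letter case."""
    name = filename.lower()
    return any(name.endswith(ext) for ext in RISKY_EXTENSIONS)

print(is_risky_attachment("invoice.pdf.EXE"))  # True
print(is_risky_attachment("photo.jpg"))        # False
```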

source link

Do You Need Computer Training?

by: Rick Boklage

The answer is yes – sooner or later you will need some computer training. As computers evolve and new software becomes available, people are finding it important to keep their computer skills up to date.

If you work in an office environment, for example, you may be faced with a situation where your employer purchases a new piece of software in the hope that it will make the company more efficient. As a result, you may be required to learn this new piece of software. Even if you work in a warehouse, the new software may require you to enter inventory and print packing slips.

Do you need to know everything about the software? The answer is no, you by no means have to become a computer expert. Just acquire the skills that will allow you to use the software as efficiently as possible in your day to day activities. Here are some ways to gain those computer skills.

1. The software manual. Take some time to briefly read the titles and summaries. Then, when you come across something you are not sure how to do, you may think, "I remember reading something about that," and quickly find it again in the manual.

2. Software specific books. These are books you can purchase at most major book stores, and they are quite often written by people who are experts with the software. The text is often followed by examples, which makes the material easier to understand and quicker to learn.

3. CD tutorials. For some of the more popular software you may be able to purchase a CD tutorial. These training aids take you step by step through the different functions of the software. By "doing" as you learn, you are more likely to remember the various functions.

It's never too late to start computer training. The skills you learn today may be all you need to get that promotion or qualify for that new job. Taking the initiative to upgrade your computer skills shows your employer, or a potential employer, that you are able to adapt to the ever-changing computer workplace.

source link

The Top Ten Spyware And Adware Threats That Exist To Harm Your Computer

by: Karl Smith

If you have a computer, then chances are you have either spyware or adware lurking on it somewhere. Estimates suggest that 90% of computers are infected with malicious software (malware) of some type. They take over your computer, infest it with pop-ups and other junk, or even worse, steal your private information. Although there are countless types of spyware and adware, these are the ones generally considered the biggest threats to your computer.

Gator - This adware tracks the sites you surf to build a picture of your online interests. Once it has enough data, it starts to bombard you with everything from banner ads to pop-ups on similar subjects, hoping to grab your interest and attention. Your computer generally gets infected with Gator when you share files through Kazaa or similar services, or when you download freeware.

n-Case - Another type of adware, which subjects you to an endless flood of pop-up ads. Downloading freeware from online sites is the way your computer usually gets infected with this malware.

PurtyScan - This is particularly sneaky adware. First it displays a pop-up ad on your computer offering to detect any pornographic content on your computer, with an offer to remove it. If you click on the ad, however, you end up at a website which then infiltrates your computer with even more spyware and adware.

Transponder - Similar to Gator, in that it 'watches' your online behavior, then bombards you with ads that this malware decides are relevant.

CoolWebSearch - This malware is certainly not cool, but malicious. First it hijacks your Internet settings and home page, then redirects you to a web page of its own. As you can imagine, that website is full of more adware and spyware, waiting to get into your computer.

Internet Optimizer - You could almost call this piece of malware a hacker. It takes control of your home page, and also any other web pages you visit. The final step is to pass you on to its own web page, so that it can download other malware onto your computer.

Perfect KeyLogger - This malware is a tool for hackers. It can record your keystrokes, which makes it simple to find out private information such as credit card numbers, passwords and other private details. These are then passed on to the hacker.

ISTbar/Aupdate - Although it pretends to be a toolbar, this is still malware. It's a form of spyware, and it operates by continually displaying pornographic images and pop-up ads on your computer, which is both embarrassing and annoying. It also takes control of your Internet settings and web page.

TIBS Dialer - If you access the Internet via a dial-up connection, this malware will hijack your phone modem and then transfer you to various websites full of pornography.

Keen Value - This malware starts out by tracking your online behavior, analyzing every website you visit. It also collects your personal information if you fill out any forms online, and then bombards you with endless advertisements, many of which link through to websites full of dangerous malware.

This is why protection is always important, even if you surf the net just once a week.

source link

How To Build A Computer Without Really Trying

by: Darryl F. Griffin

I didn’t start out trying to build a computer. My CD disk drive stopped working, and I kept getting an error message that said, “This drive is not available.” I went to the “my computer” file to check the status of the drive through its properties, but the drive wasn’t listed. I then went to the device manager to check the drive’s status, but it wasn’t listed there either.

Now I’m not a computer whiz or computer “geek” but I do have some knowledge, although limited, as to how a computer works. After spending literally hours trying to figure out what happened to my CD drive, I finally decided to cross the line and venture into the unknown. I took the cover off the computer case. I had no idea what I was looking for so I started tinkering around to see if maybe something had come loose. Sure enough, a cable going to the back of the CD player was unplugged. I plugged the cable back in, put the cover back on, plugged in the power cord and pushed the power button to fire her up. I was kind of surprised to see that it actually worked. This got me to wondering how hard it would be to build a computer from scratch. I went searching on line to see if this was possible, something that I could do. After reading various articles and visiting numerous web sites I decided to give it a try.

The first step was to determine what kind of computer I wanted. First and foremost it had to be fast. No sense going through all the trouble to build a dud. It also had to be capable of handling large video files, many photographs, and a vast amount of music (songs). And finally I wanted it to be a media center; capable of playing and recording music, playing and recording DVDs, downloading and playing games, and capable of playing cable television. I also wanted to be able to connect an overhead projector and have my wall as the screen while surfing the net. Once I decided what I wanted, I started looking for the components and or parts I would need.

The first thing I needed was a case, or tower. I found out that there are guidelines standardizing cases, such as the ATX form factor. This is a standardized case design built to accept certain motherboards, which in turn determines the layout of the inside of the case. I found a great source for the parts I would need in an online store named Newegg.Com. Although I didn’t know it at the time, I quickly found out that in addition to a vast product selection and very competitive prices, their customer service was “top flight.” Yes, I highly recommend these guys.

First on my list was a Rosewill R114A-SLV silver steel mid-tower computer case. This case came with a 400w ATX 20-pin main connector power supply. (See photo A). Next on the list was a motherboard. I needed an ATX Intel motherboard, (ATX meaning it would fit perfectly in my ATX mid-tower). For this I chose the ASUS P5P800 Socket T (LGA775) Intel 865PE ATX Intel motherboard. It is very powerful and affordable, and supports Intel’s Pentium 4 processor®. This processor supports Hyper-Threading technology which, according to Intel’s web site “results in more efficient use of processor resources, higher processing throughput, and improved performance on today’s multithreaded software.” This motherboard also comes with a 775 pin Land Grid Array (LGA-775) socket designed for the Intel® Pentium® 4 processor, and most importantly to me, a Users Guide.

Next on the list was a processor. I chose the Intel® Pentium® 4 processor, 530J 3.0 GHz, 800 MHz FSB, in the 775 Land package. This super fast processor comes with a heat sink and fan assembly which installs with push pins. For memory I decided on four of Rosewill’s 512 MB 184-pin DDR SDRAM DDR400 (PC3200) modules. For the hard drive I chose Western Digital’s WDC1600®, a 160 GB 7200 RPM Serial ATA hard drive. I chose a Mitsumi 1.44 MB 3.5” internal floppy, a Rosewill DVD burner, model RD-162, and a Rosewill CD burner, model RR-52 (both retail). With the exception of the floppy, all of my components were retail, i.e., in original manufacturer’s packaging. Also on my list were two 80mm sleeve-bearing, blue LED case cooling fans, a Sound Blaster Live® sound card, an ATI All In Wonder 9600® 8X graphics/TV card, a Dell® 17” Ultra Sharp flat panel monitor, a wireless mouse and keyboard, an HP Photo Smart 7660® printer, and a Logitech Z 2300® speaker system.

As you can see, I did my homework. Before taking on this task, I didn’t have a clue what a motherboard was, what a CPU’s function was, what a hard disk was and what it was for, what compatibility issues I might encounter, or how all of this “stuff” worked together. Over the course of about three months I purchased all of these components. My very first mistake was ordering the ASUS P4P800SE instead of the ASUS P5P800SE. The P4P800SE is not compatible with the Intel P4 LGA-775 processor®. So here was a chance to test the service level of Newegg.Com. I emailed them and explained my situation, and without hesitation they exchanged the motherboard and didn’t charge me a restocking fee or freight. They acted as if they had made the mistake. Needless to say, I was VERY impressed.

Once all of the parts and components were here, I laid everything out, identified everything, and read the users guide that came with the motherboard from cover to cover. Now was the “nuts and bolts” time, the time to put this thing together. The motherboard came with ten screws and ten felt washers which were used to attach the motherboard to the case chassis. I placed the felt washers over the holes on the chassis and placed the motherboard on top of the felt washers, so that the washers sat between the chassis and the motherboard. I then secured the motherboard to the chassis with the ten screws.

Next was installing the CPU. I found out real fast how sensitive a piece of equipment this is. There are 775 tiny pins or connectors that could easily get bent and thus make the CPU useless. This I considered the most intimidating part. However, I said a prayer, took my time, got the CPU lined up correctly, and proceeded with caution. Perfect match! Perfect fit! The sweating was over. I then installed the heat sink and fan assembly onto the CPU with the push pins (push down and twist clockwise), and plugged the CPU fan cable into the connector on the motherboard labeled CPU_FAN.

I then installed the Serial ATA hard disk drive into one of the internal bays, followed by the floppy disk drive. Next to install was the system memory. It’s very important that you first “ground” yourself by touching the metal chassis before handling the Dual Inline Memory Modules (DIMMs). This motherboard comes with four DIMM sockets, enabling various configurations based on the amount of memory to be installed. I chose four 512MB DIMM modules, which kept it simple. I unlocked the DIMM sockets by pressing the retaining clips outward, aligned each DIMM on its socket so that the notch on the DIMM matched the break on the socket, and, pushing straight down, firmly inserted the DIMM until the retaining clips snapped back into place.
I then installed the DVD optical drive in the first bay, and the CD optical drive in the second bay. This particular case has flip-up doors which conceal the optical drives. I then installed a network card into one of the five PCI slots and secured it to the chassis with screws. Next I installed the ATI All In Wonder 9600® graphics card into the Accelerated Graphics Port (AGP) slot. This motherboard only supports a 1.5v or 0.8v AGP card, which is keyed to fit into the AGP slot. Next, it was time to set the “jumpers.” Jumpers determine how a part of the computer will function. For example, there’s a three-pin keyboard power jumper which lets you enable or disable the keyboard wake-up feature; a jumper cap covering two of the three pins determines the jumper’s function.

Next came the fun part, the internal connections. I connected the FDD to the floppy disk connector with the FDD signal cable, then connected a power cable to the FDD. Next I connected the serial ATA hard disk drive to one of the two SATA connectors with a serial ATA signal cable, and then connected a power cable to the hard drive. I then plugged in the CPU fan connectors, the serial (COM) port module cable to the serial port connector, two USB 2.0 ports, and the game module. I then connected power cables to the two optical drives. Next I connected the ATX power connectors (24-pin EATXPWR, 4-pin ATX12v), the internal audio connector (4-pin CD, AUX), the front panel audio connector (10-1 pin FP-Audio), and last but not least, the system panel connector (20-1 pin Panel). The system panel connector is color coded, so connecting it was fairly simple. I then replaced the system case cover, connected the monitor, the wireless receiver for the keyboard and mouse, the speakers, and the power cord, and plugged the cord into a wall outlet.

Now for the moment of truth! I pushed the power button. Nothing happened! No lights on the system panel, no onboard LED light, no CPU fan running! Nothing! Needless to say I was crushed. All of this work for nothing. I started wondering what I could have done wrong, or whether it was some kind of compatibility issue. I went back to the beginning and retraced all of my connections, and they were all correct. After about an hour of tracing and retracing my steps it hit me: there was no power coming into the system! I plugged in a lamp to test the outlet, and it worked fine. After going over everything again and again I realized that maybe, just maybe, the felt washers were somehow preventing a connection. It was worth a try, so I uninstalled everything, and I mean everything!
I then took out the motherboard, removed the felt washers, and replaced the motherboard so that it was in direct contact with the case chassis. I put the felt washers on top of the motherboard, and then tightened the motherboard to the chassis with the ten screws. I realized that the only instruction not in the users guide was the correct placement of the felt washers! I then reinstalled everything. When I plugged the power cord into the wall outlet, the onboard LED light came on! I pushed the power button and she came right on. It worked! Oh how happy I was.

I then went into the BIOS (Basic Input/Output System) and followed the steps to set the many different parameters that control the operation of the computer. With the users guide, these were pretty easy to set. Once I finished, I installed the operating system, Microsoft Windows XP® with Service Pack 2. Seeing the ASUS logo followed by the Windows XP logo was one of the most gratifying parts of this whole ordeal. I was thrilled! My computer worked flawlessly the first time I used it, and has worked flawlessly ever since. I can watch TV programs, surf the internet at super fast speeds (with a cable modem), download songs and create play lists, create photo disks and albums, listen to AOL radio, watch videos, watch DVD movies, play games like Call Of Duty: The Big Red One®, and print excellent photos. This computer is awesome!

Overall this was a very intense learning experience. Once I was committed there was no turning back, because too much money had been spent. By the way, I already owned a desktop computer system made by a highly respected manufacturer. I paid close to $1,100 for the complete system and it has nowhere near the capabilities of the computer I just built. There is just no comparison. And I spent a whole lot less building my own – I actually saved hundreds of dollars.
I priced major name brand computers with the power and capabilities of the one I just built, and the cheapest came to about $2,200. Unbelievable! Just goes to show, anything is possible if you stick to it, and more importantly, if you have the Lord on your side…

source link