
People often think of servers as huge machines that take up an entire room and are maintained by a large team of programmers. However, in reality, a server has a lot in common with a regular computer, especially if it serves a small company.

What is the difference between a server and a PC?

The first and most important difference is that a server is fault-tolerant, serves many computers, and delivers high performance. An ordinary personal computer is meant for a single user at home or in an organization. Such a machine must be powerful enough to run programs quickly and reproduce images well, although an office computer can get by with minimal specifications that merely ensure the stable operation of the necessary programs. A server, by contrast, is a computer that services all the devices connected to it, and to process their requests as quickly as possible, the server station must offer high performance.

Another difference between a server and a computer is that the former usually has no discrete video card. If a monitor is needed, it is connected to integrated graphics built into the motherboard.

The third difference is the specialized components of the server station. A server most often runs around the clock for long stretches, so it needs cooling and power systems that are resistant to overload. In addition, it needs hard drives with high rotational speeds (10,000 rpm and up). All this makes the components several times more expensive.

Unfortunately, not all managers know the difference between a regular work computer and a server. It is often quite difficult for a system administrator to explain to the boss why, for the corporate network to perform properly, you cannot just pull an old system unit out of the closet and must instead purchase a server from Dell, HP, or another well-known manufacturer.

Functionality

Despite the external similarity, a PC and a server have completely different tasks. The latter exists, first of all, to distribute resources to office computers over the local network. In hardware there may be no difference at all, and if only a small number of machines are involved, these functions can be assigned to a standard PC. But even in that case, you need to install a server edition of the operating system on it, along with a number of special programs.

But as soon as the network grows, the load on that computer increases significantly, and then, whether you like it or not, you will have to buy a server (the price of which, by the way, is not so high).

Fundamental differences

First of all, note the more powerful internals, primarily the motherboard. These computers must handle incomparably more processes than office machines. And because of the heavy load, servers are equipped with multiple power supplies.

A high-speed network connection is extremely important to their work; the functioning of the entire system depends on it. In addition, the specialized software is resource-intensive. That is why the standard amount of RAM for a server today is around 64 GB, and hard drive capacity can run to hundreds of terabytes.

In a word, a server is a very powerful computer running specialized software. By and large it performs only one function, but if it goes down, the entire network connected to it stops working.


How does a powerful computer differ from a server?


Not every modern user understands the difference between a server and an ordinary high-performance computer. Moreover, year after year we encounter situations where a user buys a server to play computer games and is then indignant that his favorite game runs slower on the server than on his neighbor's regular computer with a good video card. Here we will try to answer such questions.

Russian users first encountered the need for servers in everyday life when file servers had to be created. File servers store the files of a particular organization; most often such machines were simply put together at computer stores. With the growth of the Internet, telecommunications servers emerged. This type of server is the most widespread today and is quite well represented, for example, by servers based on Intel solutions.

Telecommunications servers include web servers, FTP servers, mail servers, and so on. With the spread of software such as 1C, which uses databases to store its data, database servers have also become widespread. Only in large organizations are database servers represented by modern server solutions; most often they are just an upgraded personal computer.

Terminal servers have also become widespread. A terminal server allows several users to work under the control of a single computer, a direction that is quite promising for Internet cafes: owners can save not only on system units but also on the software the cafe is supposed to run. Good terminal-mode support is ensured by the server hardware's strong orientation toward multi-threaded workloads.

Given that a modern server is an integral part of many businesses, the concept of the cost of server ownership was introduced. It includes not only the hardware but also the server software and its maintenance. Unlike a high-performance computer, the main requirement for a server is fault tolerance. As a rule, a high-performance computer can sit idle for several days without its owner incurring significant losses, but downtime of an organization's server wastes time, halts the organization's work and, as a result, leads to significant financial losses. Therefore such equipment is necessarily fitted with hot-swap circuitry, duplicated power supplies, and built-in monitoring tools, an area that has seen especially active development.

Any modern server can experience two types of downtime: planned and unplanned. Planned downtime covers preventive maintenance, upgrades, relocation to another rack, and so on. The key difference between planned and unplanned downtime is that the server's owner is informed in advance that the server will be unavailable for certain periods. Unplanned downtime is largely caused by the actions of the data center and its administrators; it is rarely due to the server hardware itself, because a good server really is very fault-tolerant.

From all of the above it follows that the use of expensive server components can reduce the number and duration of unplanned outages. This is precisely the difference between a high-performance computer and a server: the components of a high-performance computer are chosen for high performance at moderate cost, while a server configuration is chosen for high fault tolerance, regardless of what the components cost.

Unfortunately, unplanned server downtime does sometimes occur because of hardware problems, most often associated with poor-quality scheduled maintenance. For example, the failure of a server cooling fan can damage other components, and unstable mains voltage can kill a power supply. Server equipment manufacturers are aware of these problems and try to provide redundancy for many components.

The question of how a server differs from a regular computer arises for any programmer or developer, sometimes out of simple curiosity, sometimes as a practical task. It is a pity that many managers do not know the difference when they try to build complex enterprise-level management systems on office PCs, and then wonder for a long time why something works “wrong”.

A server is, first of all, a network computer whose task is to distribute resources to the ordinary computers on its network. If the network is small, the server can be an ordinary PC. There is no difference between the computers here, but there is a difference in software: the server runs a server edition of the operating system, plus additional services and programs that are themselves called servers: mail, web, DHCP, and so on (see the sketch after the list below). As the network grows, the server's power must grow proportionally, and that is why you end up looking for stores that sell server equipment. You will definitely need:

  • Larger cases. Server motherboards are significantly bigger because of the abundance of interfaces and the number of processors.
  • More power supplies. Often two or three are used, and they can be hot-swapped. Server cases and power supplies are generally mounted in special racks, and "standard" plug-in units can dramatically improve server scalability.
  • High-speed network equipment. The fastest cables and other interfaces are laid precisely in the vicinity of the servers.
  • Hard drives and memory. Server programs are voracious consumers of resources, so disk capacity here is measured in tens and hundreds of terabytes, and RAM starts at 32-64 GB. Server RAM also comes with error-correcting code (ECC) and is not interchangeable with ordinary PC memory.
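
To make the software side of this concrete, here is what a "server" means in the program sense: a process that listens on the network and answers client requests. The sketch below is purely illustrative (the port number and reply text are our own hypothetical choices), using Python's standard socketserver module; real mail, web or DHCP servers follow the same pattern with far more elaborate protocols.

```python
# Minimal sketch of a network service: listen on a port, answer requests.
# Illustrative only; the port and reply text are arbitrary choices.
import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Read one line from the client and send back an acknowledgement.
        line = self.rfile.readline().strip()
        self.wfile.write(b"server received: " + line + b"\n")

if __name__ == "__main__":
    # One thread per client; serve until interrupted.
    with socketserver.ThreadingTCPServer(("0.0.0.0", 9000), EchoHandler) as srv:
        srv.serve_forever()
```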

In general, the server's appetite grows with many factors, and its price grows much faster still, which is why servers are often rented rather than purchased. Moreover, not every enterprise can afford professional staff to set it up and support it around the clock, let alone a dedicated room (a server room) in which an ideal microclimate for the equipment must be maintained.

Interestingly, the desire of many gamers to "play on the server" cannot be satisfied: in games the key factor is graphics, while on servers graphics (and monitors) are nearly superfluous, used only to monitor the state of the system. So players will have to make do with regular PCs with two or three processors, while a server can easily harness hundreds of them.

Very few publications write about servers and server hardware. The main reasons are technical complexity (there are many differences from ordinary consumer hardware) and a limited readership: such articles interest only administrators, those who make purchasing decisions, and a handful of enthusiasts curious about professional-grade hardware. However, server hardware is closer to desktop hardware than you might think, and extra knowledge never hurts.

When people think of servers, they think of large computers, heavy boards and outrageous performance, but the reality is often different. Today there are many form factors and a huge amount of hardware and software, so it is difficult to come up with a universal definition of the word “server”.

Although professional and consumer hardware have many similarities, we believe it is the emphasis on certain features and qualities that marks hardware as professional grade. For example, your home PC should be fast, quiet, upgradable and, of course, reasonably priced. It will work for several years, often sitting idle for hours at a time, and its user can replace failed hardware or simply remove accumulated dust. Servers face different requirements: reliability, 24/7 availability, and maintenance without stopping work come first.

First and most importantly, a server must be reliable. Be it a database server, file server, web server or any other type, it must be very reliable, since your business depends on its operation. Second, the server must always be available; that is, hardware and software must be selected so that downtime is minimal. Finally, prompt technical service is critical in a professional environment: if an administrator needs to perform a task, it must be done as efficiently as possible without conflicting with the criteria above. This is why a server configuration is usually the result of weighing requirements and long-term strategy, not of an emotional impulse, as is often the case with gaming PCs.
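
To put "minimal downtime" in perspective, availability is usually quoted in "nines". A quick sketch of the arithmetic (our own illustration; the targets shown are common industry shorthand, not figures from this article):

```python
# Downtime per year allowed by common availability targets.
HOURS_PER_YEAR = 365 * 24  # 8760

for availability in (0.99, 0.999, 0.9999):
    downtime_h = HOURS_PER_YEAR * (1 - availability)
    print(f"{availability:.2%} availability -> "
          f"{downtime_h:.1f} h/year ({downtime_h * 60:.0f} min)")

# 99.00% -> 87.6 h/year;  99.90% -> 8.8 h/year;  99.99% -> 0.9 h/year
```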

In this article we will talk about server components, describe the technologies common to servers and consumer PCs, and discuss the differences and advantages. Since all professional-grade components are much more expensive than regular ones, we will begin our excursion with that question.

Professional means expensive

If you buy professional components, servers or workstations, you will quickly find that they cost more than regular consumer hardware. The reason often lies not in some sophisticated technology but in the specifications of professional components and in their testing and validation. For example, the Core 2 Duo (Conroe) is very close to the Xeon (Woodcrest) in performance, but the differences lie in the sockets used, the specifications, and the systems in which these processors are installed. Server hard drives are specifically designed for continuous 24/7 operation, while desktop hard drives are not.

We usually assume that any consumer product is compatible with all others. That is not always the case, but it mostly is, so you can replace one compatible component with another and most likely have no problems. This approach is no longer acceptable when you plan to upgrade a server or perform maintenance on it.

New products for the professional market are developed with a predictable upgrade path in mind, since manufacturers want them to work with existing systems and with current and future generations of components. AMD and Intel regularly give customers roadmaps for their products, which provide a glimpse into the future, so buyers can purchase with confidence that support and upgrade options will be available over time.

Warranty and replacement of components are also very important. Whereas a broken desktop HDD under warranty can be replaced by any newer model, professional installations often require exactly the same component. The administrator therefore has to hunt down the identical product, while regular users, on the contrary, would be unhappy not to receive a latest-generation part (which, incidentally, is cheaper for most manufacturers).

The magic word in the professional market is validation. Before a new product is released, it is reviewed and tested on popular hardware platforms. The validation process ensures that companies can deliver highly complex systems to the enterprise market; after all, a business can only be built on an IT platform that works flawlessly.


AMD Opteron (Socket 940), Intel Xeon Dempsey and Xeon Woodcrest (Socket 771): popular dual-core server processors.

Of course, you're probably familiar with the Athlon, Celeron, Core 2, and Sempron processor lines, which are desktop processors for home and office computers. But AMD and Intel also have products aimed at professional customers: AMD Opteron, Intel Xeon and Itanium. The Opteron is built on the AMD64 architecture, like the Athlon 64 and Sempron processors, while the Xeon is based on the Core 2 or Pentium NetBurst architecture, depending on the model.

Professional processors typically have more interfaces (multiple HyperTransport links on the Opteron; two independent FSBs, one per processor, in the Intel world) and a richer set of features, which server applications and workstation software often require.

On the market you can find two different versions of Opteron processors: one uses Socket 940 with DDR memory, the other Socket 1207 (Socket F) with DDR2 RAM. As with all AMD64 processors, the memory controller is part of the processor, which is a significant advantage as the number of processors grows: not only do you get more memory controllers to address more memory, but each processor runs its own block of memory. Of course, this raises coherence problems and increases the complexity of multiprocessor systems, but overall throughput turns out higher. Opterons for Socket 940 use PGA packaging, that is, the pins are on the processor. Opterons for Socket 1207 switched to LGA packaging, where the pins are in the socket and flat contacts are on the processor.

These days, dual-core processors should be your choice: even at lower clock frequencies, they outperform single-core models in the server market. Dual-core Opterons for Socket 940 are built on the Egypt and Italy cores, the latter being the more advanced. But today we recommend choosing models for Socket 1207 (Socket F), thanks to DDR2 memory support and the ability to upgrade to quad-core processors, which should appear sometime this year.


AMD's current 1207-pin Socket F is suitable for current dual-core and future quad-core Opteron processors.

Intel Xeon processors come in different versions; previous ones used Socket 604, while modern platforms are based on Socket 771, an LGA socket. Among the various Xeons we recommend sticking to dual-core models; a full list of processors is available at http://www.intel.com/products/processor_number/chart/xeon.htm.

Models 5030 through 5080 are manufactured on a 90 nm process and are based on the now-outdated NetBurst architecture. We recommend Woodcrest-based Xeon processors, with model numbers from 5110 (1.6 GHz) to 5160 (3.0 GHz): they are produced on 65 nm technology, consume less energy, and deliver high performance. The E53xx line is built on quad-core Clovertown processors with frequencies from 1.6 to 2.66 GHz.

Xeon processors do not have an integrated memory controller; instead they rely on the motherboard's quad-channel DDR2-667 memory controller. To provide sufficient throughput for dual- or quad-core processors, the modern Socket 771 platform (Blackford) provides two independent FSBs (Dual Independent Buses), one for each processor.


Intel is the first manufacturer to introduce quad-core processors. Clovertown is assembled from two Woodcrest dual-core dies placed in one package.


Intel Xeon Dempsey (65nm NetBurst), Woodcrest (65nm dual-core Core 2) and Clovertown (65nm quad-core Core 2).

Server memory works on the same principle as regular memory in consumer PCs. The modern standard is DDR2 (second-generation Double Data Rate SDRAM). DDR2 uses a deeper prefetch (4 bits instead of 2), so the interface frequency can be doubled compared with DDR1.
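
As a back-of-the-envelope illustration of what those interface numbers mean (our own arithmetic, based on the standard DDR2 naming scheme):

```python
# Peak bandwidth of one DDR2-667 channel (illustrative arithmetic).
transfers_per_sec = 667e6  # DDR2-667: two transfers per 333 MHz bus clock
bus_width_bytes = 8        # one 64-bit channel
peak_mb_s = transfers_per_sec * bus_width_bytes / 1e6
print(f"{peak_mb_s:.0f} MB/s per channel")  # ~5336 MB/s, marketed as PC2-5300
```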

Compared with consumer memory, professional memory has two different mechanisms designed to preserve data integrity. Registered memory contains a small chip, the so-called "register", which refreshes the signal. Whereas a regular PC cannot use more than four (or sometimes six) DIMMs, because the signals pass through all the memory modules and attenuate, registered memory easily allows eight modules to be installed. In addition to the register, DDR2 memory has on-chip termination, which prevents signal reflection.

The second mechanism is ECC, an error-correcting code. Instead of storing the standard 64 bits, an ECC DIMM channel adds another memory chip that stores a further 8 bits, allowing data to be recovered: single-bit errors can be corrected on the fly.
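
The article does not say which code the modules use, but the classic fit for 64 data bits plus 8 check bits is a Hamming-based SECDED code (single-error correction, double-error detection). A minimal sketch, assuming that scheme; real memory controllers may use a different but equivalent code:

```python
# Hamming SECDED over 64 data bits + 8 check bits (the 72-bit ECC layout).
PARITY_POSITIONS = [1, 2, 4, 8, 16, 32, 64]  # powers of two, 1-indexed

def encode(data):                      # data: list of 64 bits (0/1)
    code = [0] * 72                    # positions 1..71; index 0 unused
    j = 0
    for i in range(1, 72):             # data goes into non-power-of-two slots
        if i & (i - 1):
            code[i] = data[j]; j += 1
    for p in PARITY_POSITIONS:         # parity p covers positions with bit p set
        for i in range(1, 72):
            if i & p and i != p:
                code[p] ^= code[i]
    overall = 0                        # 8th check bit: overall parity (for DED)
    for i in range(1, 72):
        overall ^= code[i]
    return code[1:], overall

def decode(word, overall):
    code = [0] + list(word)
    syndrome = 0
    for i in range(1, 72):
        if code[i]:
            syndrome ^= i              # XOR of set positions = error location
    parity = overall
    for i in range(1, 72):
        parity ^= code[i]
    if syndrome and parity:            # exactly one flipped bit: correct it
        code[syndrome] ^= 1
    elif syndrome:                     # syndrome set, parity clean: two flips
        raise ValueError("double-bit error detected, uncorrectable")
    return [code[i] for i in range(1, 72) if i & (i - 1)]
```

Flip any single bit of the encoded word and decode restores the original 64 data bits; flip two, and it reports an uncorrectable error instead of returning silently corrupted data.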

All AMD Opteron processors for Socket 940 require registered DDR333/DDR400 memory, while the Socket F (Socket 1207) generation requires registered DDR2-667.

Fully Buffered DIMMs (FB-DIMMs) use a so-called buffer component, a high-powered chip that converts the parallel signals into a serial interface. Its main purpose is to connect more than eight memory modules to the controller: with Intel's quad-channel DDR2 memory controller you can install eight 2 GB DIMMs on each of the four channels, if motherboard manufacturers choose to support that configuration.
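
For scale, the configuration just described maxes out as follows (simple arithmetic on the figures above):

```python
# Maximum capacity of the FB-DIMM configuration described in the text.
channels, dimms_per_channel, gb_per_dimm = 4, 8, 2
print(channels * dimms_per_channel * gb_per_dimm, "GB total")  # 64 GB
```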

FB-DIMMs are more expensive, run hotter, and are no faster than regular registered memory. Still, they are most likely the future of servers with large amounts of memory, and this is the technology used on current Intel Xeon platforms.



As an example, we took the Asus P5MT server motherboard (it is used in entry-level servers because it accepts regular desktop processors rather than more expensive server ones). Server motherboards do not support overclocking and are usually equipped with a large number of interfaces, as well as high-bandwidth expansion slots.

The 133 MHz PCI-X bus continues to be the dominant interface for expansion cards. It builds on the parallel PCI bus found in almost every PC today, but PCI-X is 64 bits wide, versus 32 bits for the PCI bus in your computer. PCI-X 133 supports bandwidth up to roughly 1 GB/s (64 bits at 133 MHz). However, remember that the PCI-X controller's bandwidth is shared among all connected devices.

The PCI Express (PCIe) interface is more modern. PCI Express is a serial interface that uses multiple lanes to connect a device to the controller. Professional expansion cards typically use PCIe x4 slots (four lanes), but x1, x8 and x16 PCIe cards and slots exist as well. PCIe x16 is typically used for high-end graphics cards; graphics workstations carry two full PCIe x16 slots for two graphics cards.
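
A quick comparison of the raw numbers for these buses (our arithmetic; the PCIe figures assume the first-generation 2.5 GT/s signaling with 8b/10b encoding that was current for these platforms):

```python
# Peak theoretical bandwidth of the expansion buses discussed above.
pci       = 33e6 * 4            # PCI: 32 bits x 33 MHz -> ~133 MB/s, shared
pci_x_133 = 133e6 * 8           # PCI-X 133: 64 bits x 133 MHz -> ~1066 MB/s, shared
pcie_lane = 2.5e9 * 8 / 10 / 8  # PCIe 1.x lane: 2.5 Gbit/s, 8b/10b -> 250 MB/s per direction

for name, rate in [("PCI", pci), ("PCI-X 133", pci_x_133),
                   ("PCIe x4", 4 * pcie_lane), ("PCIe x16", 16 * pcie_lane)]:
    print(f"{name:10s} {rate / 1e6:5.0f} MB/s")
```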

Motherboards for servers and workstations usually contain a built-in network controller. It can be built on the same components found on consumer-grade motherboards, but usually includes more powerful chips that provide, for example, hardware offloading of TCP/IP work, or other functions that increase performance.
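
One concrete example of TCP/IP work that a server NIC can take off the CPU is the Internet checksum carried by IP, TCP and UDP (RFC 1071). A software sketch of the computation that checksum offload performs in hardware:

```python
# RFC 1071 Internet checksum: ones'-complement sum of 16-bit words.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF

# Sanity check against the worked example in RFC 1071, section 3:
words = bytes([0x00, 0x01, 0xF2, 0x03, 0xF4, 0xF5, 0xF6, 0xF7])
print(hex(internet_checksum(words)))  # 0x220d
```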

This board is equipped with four DDR2 memory slots, a single Socket 775 for a Pentium 4 or Core 2 processor, one 32-bit PCI slot, one PCI Express x16 slot for a video card or a powerful storage controller, and two PCI-X 133 slots. Two Broadcom Gigabit Ethernet controllers are responsible for networking. The GPU installed on the motherboard is an ATi chip; it is of course outdated, but it is sufficient to display a desktop or command line, which is all that server operating systems require.

All the other interfaces and components are also found on consumer hardware: the south bridge, UltraATA/100 or Serial ATA controllers, voltage regulators, and so on. The significant difference, again, lies in the validation process, during which manufacturers test how their products work with others and publish compatibility lists.


The ATi RageXL chip is many years old and doesn't support 3D graphics, but it's good enough for servers; besides, most of the time nobody is looking at the screen anyway.

A little earlier we mentioned a motherboard with an integrated video card. All server motherboards are equipped with a very simple graphics processor and a small amount of dedicated memory; solutions that borrow memory from system RAM are not popular here. The successor to the RageXL today is the ATi ES1000 graphics processor, which started out in the consumer market but then moved into servers as its hardware and drivers improved. Administrators do not even need to think about installing a special or updated driver: the driver ships with the OS and is certified.

Workstations, on the other hand, require more powerful hardware. ATi targets this market with its FireGL graphics accelerators, built on the Radeon X1000 line; Nvidia offers the Quadro FX line, which is very close to the GeForce 7000 family. The differences between consumer and professional chips can be small, for example in driver optimization. Professional graphics cards deliver excellent performance in specialized applications, but they also cost much more.

Hard drives are another interesting aspect of servers and workstations. A few years ago, server hard drives used the Small Computer System Interface (SCSI) and spindle speeds of 10,000 or 15,000 rpm, significantly outperforming 7,200 rpm desktop drives. Server hard drives are still faster, although the gap is no longer so great.
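
Spindle speed matters mainly because it bounds rotational latency: on average the head must wait half a revolution for the right sector to arrive. A quick illustration (our arithmetic):

```python
# Average rotational latency = time for half a revolution.
for rpm in (7200, 10000, 15000):
    latency_ms = 0.5 * 60 / rpm * 1000
    print(f"{rpm:6d} rpm -> {latency_ms:.2f} ms average rotational latency")
# 7200 -> 4.17 ms; 10000 -> 3.00 ms; 15000 -> 2.00 ms
```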

The professional hard drive market is divided into three segments. The capacity segment uses conventional 3.5" Serial ATA hard drives validated for 24/7 operation. The performance segment tries to maximize data storage density, which is why more and more high-performance 2.5" 10,000 rpm drives with the Serial Attached SCSI (SAS) interface are appearing. The high-performance segment relies on 15,000 rpm SCSI or SAS hard drives.

Server and workstation hard drives typically require active cooling because they are optimized for maximum reliability and performance. All professional hard drives come with a five-year warranty.

Power supplies for the professional sector are specially designed with maximum reliability in mind. Any decent power supply can cope with the consequences of one missing phase, but professional units can handle more serious failures. Some also provide surge protection, although here we overlap with territory that belongs to uninterruptible power supplies (UPS).

Professional power supplies are modular and provide redundancy in the form of two modules, each of which can supply sufficient power to the system on its own. If one power module fails, the system continues to operate from the second.
