Posted at 11.04.2018
This document comprises two chapters, one for each of the two case studies given in the task. The first chapter defines diskless workstations and introduces their characteristics. In addition, it introduces the choices available in the thin client market. The first chapter also discusses how to choose an operating system for diskless workstations. Later in the chapter, a discussion is presented on the interdependence between workstation and network hardware as it pertains to employing diskless workstations in the network.
Chapter two is devoted to the multi-vendor network strategy. First, it discusses the pros and cons of multi-vendor networking systems. Then it evaluates the impact on current network technology and requirements. Furthermore, the chapter contains a discussion of how network protocols help in multi-vendor networks. In the final part of the report, the role of the software and hardware components, as well as suggestions for selecting server types for multi-vendor networks, is also discussed.
A diskless workstation is a computer system without disk drives installed locally; it therefore boots its operating system from a server on the local area network. Sometimes a computer system that has a disk drive but does not use it is also called a diskless workstation. Diskless workstations provide less costly but effective networking solutions for corporations.
The characteristics of diskless workstations are:
The operating system is loaded from the server at boot time. Naturally, all the other software also resides on the server. (Firmware installed on the diskless workstation itself starts the boot process.)
Processing is performed on the diskless workstation, not on the server. In some implementations processing is instead done on the server, and those diskless workstations are called "thin clients".
Both the raw data and the processed data are stored on the server. The diskless workstation fetches them when needed.
Choices available on the market
Conventional diskless workstations with lower processing power and storage, e.g. the Dell Wyse R thin client
High-performance thin clients such as the HP t510 Flexible Thin Client
Almost all Linux flavours, such as Ubuntu and openSUSE, support network booting and can therefore be installed on the centralized server. Windows XP, Vista, 7 and 8 also support booting over the local area network, so those operating systems can be deployed as well. User-friendly third-party software is available on the internet to simplify deployment of diskless workstations in business networks.
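In practice, network booting is usually arranged with PXE: the client's firmware requests an address over DHCP and then fetches a boot loader over TFTP. A minimal configuration sketch, assuming dnsmasq provides both DHCP and TFTP on the LAN (the subnet, paths and filenames below are illustrative, not taken from the text):

```conf
# /etc/dnsmasq.conf -- hypothetical values for an example office subnet
dhcp-range=192.168.1.100,192.168.1.200,12h   # addresses leased to clients
enable-tftp
tftp-root=/srv/tftp                          # directory holding boot files
dhcp-boot=pxelinux.0                         # boot loader sent to diskless clients
```

With a setup along these lines, the diskless workstation's firmware needs no local configuration at all; everything it loads comes from the server.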
When choosing an operating system for the diskless workstations, the following facts can be considered.
Linux operating systems and their software are totally free, whereas Windows operating systems cost hundreds of US dollars (Agrawal et al, 2005).
Linux supports many more processor types and architectures than Windows does.
Since Linux is open source, a skilled IT administrator can change the behaviour of the operating system as needed.
Linux is extremely stable. It provides a feature called memory protection which prevents a crashed application from crashing the whole system (Agrawal et al, 2005).
Linux offers more security than Windows does. Far fewer viruses and malware target Linux than Windows, so the server can operate without a hazard to its operating system or the data stored on it.
Linux outperforms Windows when it comes to multi-user workstations, but sometimes Linux is more "resource hungry" than other operating systems (Agrawal et al, 2005).
Both Linux and Windows support multitasking.
Diskless workstations have their operating systems on the server. When a workstation needs to use a network component such as a printer, the server has to talk to that component; i.e. the server communicates on behalf of all the diskless workstations. This can lead to congestion and an increase in traffic.
Also, all the network computers share the server's hard disk, CPU, memory, etc., and workstation (client) hardware must wait until the server hardware supplies the data it requested. So, in short, deploying diskless workstations/network computers increases the interdependence of workstation hardware in the network. It is therefore very important to install reliable hardware and software components in the server, and to use backup and redundancy techniques for the server.
Network topologies characterize how network elements (nodes) are interconnected in a network. There are four standard network topologies to be discussed. (Tanenbaum 2006)
1) Bus topology
2) Ring topology
3) Star topology
4) Mesh topology
All the nodes are connected to a single cable called a bus.
Advantages:
1) Simple to implement
2) Requires less cable length, and therefore it is cheaper
3) If a node (computer) fails, that does not affect the others
Disadvantages:
1) Suitable only for networks with few computers (Lowe 2008)
2) If the cable breaks at any point, the complete network fails
Network nodes are connected in a ring. When two nodes are communicating, data must travel through all the intermediate nodes (Lowe 2008)
Advantages:
1) Easy to implement
2) Easy to troubleshoot
Disadvantages:
1) If a node fails, the whole network fails
Each computer is connected to a hub or switch.
Advantages:
1) Its centralized nature gives simplicity (easy to troubleshoot) (Lowe 2008)
2) If a node (computer) fails, that does not affect the others
Disadvantages:
1) If the hub fails, the entire network fails
2) Requires more cable length
Each node is connected to every other node.
Advantages:
1) Offers redundancy
2) Easy to troubleshoot
3) Multiple conversations can take place at the same time
Disadvantages:
1) Wastes resources
2) Requires more cable length and is therefore expensive
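The cabling cost differences between these topologies can be made concrete by counting the links each one needs for n nodes. A small illustrative sketch (the formulas are standard textbook ones, not figures from the sources cited above):

```python
def links_needed(topology: str, n: int) -> int:
    """Number of cables/links needed to connect n nodes in the given topology."""
    if topology == "bus":
        return n                      # n drops onto one shared cable
    if topology == "ring":
        return n                      # each node links to the next, closing the loop
    if topology == "star":
        return n - 1                  # one cable from every node to the central hub
    if topology == "mesh":
        return n * (n - 1) // 2       # every pair of nodes gets its own dedicated link
    raise ValueError(f"unknown topology: {topology}")

# For a hypothetical 10-node office network:
for t in ("bus", "ring", "star", "mesh"):
    print(t, links_needed(t, 10))
```

The quadratic growth of the mesh count is exactly why the text above calls mesh wasteful: ten nodes already need 45 links, against 9 for a star.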
Network Computer was originally a trademark used by Sun Microsystems for their diskless workstations. Later this term came to be used for all diskless workstations. A thin client is also a diskless workstation, but unlike a plain diskless workstation, a thin client does its processing on the server.
For network computers and thin clients, mesh topology is not appropriate. In a mesh topology all the clients are connected to each other, but those connections are useless: since the server is connected to all the clients and holds the files and processed data, it can transfer them directly to the desired destinations. Bus topology and ring topology are too risky and also a waste of resources. If the network uses the star topology, i.e. every diskless node is connected to the centralized server through a dedicated path, the resources will be used in an efficient manner.
One might think that if the data are stored on a remote server, accessing it from a thin client will take more time than on a normal workstation; also, in a typical thin-client implementation all the processing is done on the server. But thanks to modern LAN technologies this is not a problem at all. Gigabit Ethernet provides 1 gigabit per second within the LAN, and the 10 and 100 gigabit Ethernet variants push data rates even higher.
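To put that bandwidth in perspective, here is a rough back-of-the-envelope sketch; the image size and the efficiency factor are illustrative assumptions, not figures from the text:

```python
def transfer_seconds(size_mb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Approximate time to move size_mb megabytes over a link_gbps link.

    efficiency is an assumed factor for protocol overhead and contention.
    """
    bits = size_mb * 8 * 1_000_000                      # megabytes -> bits
    return bits / (link_gbps * 1_000_000_000 * efficiency)

# A hypothetical 200 MB boot image over plain Gigabit Ethernet:
print(round(transfer_seconds(200, 1.0), 2), "seconds")   # -> 2.29 seconds
```

Even under pessimistic assumptions, fetching an operating system image over the LAN takes only seconds, which supports the claim that remote storage is not a bottleneck for thin clients.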
Also, in the last ten years hard disk drives and processors have advanced a lot, offering greater speeds, memory capacities and performance to the network. Since all the clients keep their data on the central server, the server needs high-capacity hard disks with high access speeds. The technology has also become cheaper over time. These facts have really helped the evolution of the network computer idea.
It is evident that a network with diskless workstations/network computers has much more data to transfer back and forth between the server and the clients than a network with normal PC workstations. So there will be more traffic in the network, and almost all the time the server will be accessed by many client workstations. This can lead to collisions, and collisions trigger retries from the clients, which add further to the network traffic, potentially causing exponential growth of the traffic. Therefore a good multiple access protocol is necessary for the network, in order to effectively utilize the real strength of network computers. The TCP/IP protocol stack provides powerful multiple access technology at its data link layer.
Ethernet, Fast Ethernet and Gigabit Ethernet are some of the main physical layer protocols that enable fast communication between network computers.
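Classic Ethernet handles those collisions with binary exponential backoff: after the n-th collision a station waits a random number of slot times drawn from 0 to 2^n - 1 (with the exponent capped at 10). A small sketch, using the standard 51.2 microsecond slot time of 10 Mbit/s Ethernet; the surrounding framing is illustrative:

```python
import random

SLOT_US = 51.2  # classic 10 Mbit/s Ethernet slot time, in microseconds

def backoff_slots(collision_count: int) -> int:
    """Pick a random backoff (in slot times) after the given number of collisions."""
    k = min(collision_count, 10)       # CSMA/CD caps the exponent at 10
    return random.randrange(2 ** k)    # wait 0 .. 2^k - 1 slot times

def mean_backoff_us(collision_count: int) -> float:
    """Expected backoff delay in microseconds after collision_count collisions."""
    k = min(collision_count, 10)
    return (2 ** k - 1) / 2 * SLOT_US  # mean of a uniform draw on 0 .. 2^k - 1

for c in (1, 3, 10):
    print(c, "collisions -> mean wait", mean_backoff_us(c), "microseconds")
```

The doubling of the expected wait after each collision is what keeps a heavily loaded shared medium from collapsing under retries, which is exactly the risk described above for server-centric traffic.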
When a network evolves over time, the enterprise wants to purchase more equipment for it. But by then there may be cheaper products on the market from vendors other than the original one, so multi-vendor networks can save on the cost of the evolved network. Also, when a new technology launched by a different vendor saves time and cost, it is better to buy that equipment than to stick with the same vendor.
Different vendors have different configuration settings, different user interfaces, different terminology, etc. Therefore working on a multi-vendor system is a harder job and requires more expertise and experience. It may also require training programs for existing specialists, and it may demand more IT experts for the company.
If we consider the star topology, each node is connected to the hub or switch, so at most two nodes from different vendors will actually be interacting with each other. But if we take the mesh topology, the situation differs: each node is connected to every other node in the network, so a machine built by a particular vendor must communicate with many more machines made by different vendors.
New network operating systems are compatible with one another. Services are built into those operating systems, and therefore they can co-exist after little or no configuration change.
Network protocols are standardized by bodies such as the IEEE to maintain consistency in network devices and procedures. This helps multi-vendor network environments grow in popularity.
Different network components may have different hardware and/or software features, and they may be made by different vendors. But at the end of the day, a network administrator must be able to connect all those network components to each other and build a working network. This is achieved by the use of network protocols. (Lammle 2007)
Generally, today's multi-vendor systems use the TCP/IP protocol stack, which comprises five layers. A layer normally has two interfaces, one with the layer immediately below and one with the layer immediately above. Each layer offers a set of functions to the layer above, and depends on the functions of the layer below (Kozierok 2005). The interface at the top clearly identifies the services available from that layer, and the interface at the bottom clearly identifies the services required from the layer immediately below (Kozierok 2005).
So, as long as network component manufacturers stick to this layered protocol architecture, it does not matter how the hardware works or what hardware and software features are inside.
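This layering shows up concretely as encapsulation: each layer wraps the data from the layer above in its own header, without caring how the other layers are implemented. A toy sketch of the idea (the header strings are purely illustrative, not real protocol formats):

```python
def encapsulate(payload: str) -> str:
    """Wrap application data in transport, network and link headers in turn."""
    segment = "TCP|" + payload               # transport layer adds its header
    packet = "IP|" + segment                 # network layer wraps the segment
    frame = "ETH|" + packet                  # data link layer wraps the packet
    return frame

def decapsulate(frame: str) -> str:
    """The receiving stack strips the headers again in reverse order."""
    for header in ("ETH|", "IP|", "TCP|"):
        assert frame.startswith(header), f"expected {header!r} header"
        frame = frame[len(header):]
    return frame

print(encapsulate("hello"))                  # ETH|IP|TCP|hello
print(decapsulate(encapsulate("hello")))     # hello
```

A vendor only has to produce and consume the header for its own layer correctly; everything above and below can come from someone else, which is what makes multi-vendor interoperability workable.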
Different vendors will implement the same protocol using different hardware with different performance. Even though the hardware is vendor specific, sometimes the same software can be installed on it, so the customer sees the same interfaces, which hides the complexity of the multi-vendor network to some extent. But sometimes the vendor itself develops the software that runs on its hardware, and that increases the overhead of remembering configuration options and menu items for different vendors. So in a multi-vendor network environment the job of the hardware is to perform the task in its own way with its available chips and processing power, while the job of the software is to control that unique hardware as needed but present common configuration settings and interfaces to the user.
When selecting a server for a multi-vendor network environment, the IT administrator must consider the vendors present in the network. Some vendors are interoperable while others are not. A server can be used to make communication possible among the non-interoperable vendors, an approach called server interoperability. This is accomplished by installing communication services on the server, instead of the alternative approach where software is installed on the clients to make communication compatible. In this manner we can connect an Apple Macintosh client to a Windows network environment: Microsoft Windows provides software that facilitates network services for Apple Macintosh and Linux clients. Some modern servers have these services built in, so the network administrator does not have to worry about it.
From this assignment I could sharpen my knowledge of diskless workstations and thin clients. I identified the characteristics of diskless workstations and the options in the marketplace, on both the hardware side and the software side. I reviewed the network operating systems available for diskless workstations, and also the interdependence of workstation hardware in the context of networks with diskless workstations.
Also, in order to provide answers for task 2, I analysed the advantages and disadvantages of the multi-vendor network strategy. I then discussed the impact of the multi-vendor network strategy on current network technology and standards. I also studied how network protocols permit machines from different vendors to coexist in the same network. Finally, I did a small study on selecting a server for a multi-vendor network environment and on the role of software and hardware in a multi-vendor network. That was really helpful for me, and the results are presented in the latter part of the assignment.