
From the document Nuclear Medicine Resources Manual | IAEA (Page 162-168)


4.6. COMPUTERS AND NETWORKING

4.6.1. Purchasing a nuclear medicine computer

Computers have been central to the practice of nuclear medicine for many years, particularly as the extraction of functional information commonly necessitates image analysis. Computers form an integral part of imaging equipment, providing on-line acquisition and data correction to improve instrument performance, essential functions such as tomographic reconstruction, and flexible display of images. As computer speed increases exponentially, and with memory and disk capacity showing similar growth, the capacity of the computer to tackle more complex and challenging tasks in a clinically acceptable time increases. Patient throughput and efficiency of operation are greatly aided by the computer tools available.

In recent years most vendors of nuclear medicine imaging equipment have moved away from proprietary computer designs towards general purpose computer platforms such as the IBM PC, the Macintosh and various desktop workstations running the UNIX operating system. By adopting these relatively highly developed and widely used computer systems, for which numerous hardware and software options are available, the vendors are now able to offer support for industry standards in several important areas, including networking.

When purchasing nuclear medicine equipment it is usual to include a computer supplied by the same manufacturer, although there are instances where computers may be purchased separately. Choice of equipment should be based on the criteria outlined in earlier sections, with the choice of computer being secondary to general considerations such as the amount of support available.

Because computer performance improves so rapidly, the main problem is that a computer has a much shorter useful life than the associated imaging equipment. The limiting factor is therefore the ability to upgrade systems so that new software and features remain available. Continuation of a software support contract is advisable, as this will normally include improvements, bug fixes and new releases for a reasonable time.

However, at some stage hardware will also need to be upgraded at the customer's cost to permit operation of the current software. The adoption of industry standards by manufacturers has reduced the problem of hardware support, although most suppliers will still utilize customized hardware for special functions (e.g. data acquisition).

When specifying requirements it is important to define the expected functionality rather than the technical specifications. Acceptance testing can then be based on the capability to provide results for a specified clinical analysis in an acceptably short time. Comparison of the time taken to process various clinical studies is a useful indicator of system performance, independent of the underlying technical specifications. Many manufacturers now offer packaged software that is common across multiple manufacturers (e.g. cardiac SPECT analysis). The software supplied sometimes requires an exact acquisition protocol, otherwise its application may be invalid. Every effort should be made to ensure that the software purchased is validated in-house.

Software phantoms and test data sets available via the Internet may assist in the validation of some programs.

When purchasing computers the following factors should be carefully considered:

(a) Advice should be sought regarding the version of the operating system used. System software tends to be fairly stable but major changes occasionally occur that may limit the availability of future releases. Care should be taken to avoid manufacturers whose system software lags well behind the current release (as available directly from the system software supplier, e.g. Sun or Microsoft).

(b) Although most systems are supplied with all the clinical functionality that can be envisaged, it is important that there is some ability to customize protocols to match individual requirements. In particular, the ability to easily add simple programs is important to permit flexibility in use. In most cases some level of programmability is available suited to a non-expert user.

(c) Training in the use of the computer is as important as training in the use of the equipment itself. Ensure that adequate provision for training is included with the equipment purchase.

(d) The connectivity of the system to other equipment in the department or institution is an important consideration which may require additional hardware and software to be purchased. This is further addressed in the next section.

(e) Choice of system may be based on personal preference for a particular user interface and speed of response rather than choice of options.


4.6.2. An introduction to networking

An important development in computing has been the ability to connect computers via a network so that there can be communication and data transfer between systems. A network that extends over a limited geographical area (e.g. a single hospital department) is called a local area network (LAN). However, networking is not limited to a department but permits communication between computers in different institutions, even if these are located in different countries, via the Internet and the World Wide Web. A brief overview of the components of a typical network is provided below in order to ensure familiarity with some of the jargon used. The most important applications of networks in nuclear medicine are:

(a) To permit interconnection of imaging equipment in a department or hospital;

(b) To permit transfer of images for reporting or provision of an opinion remote from the site of data acquisition;

(c) To permit access to educational information, technical advice or software.

Networking is made possible by the adoption of a set of standards that define how information can be sent via electrical signals in a cable and deciphered by a computer interfaced to the cable. The underlying standard model for networking, as defined by the International Organization for Standardization (ISO), is the Open Systems Interconnection (OSI) model. Networking therefore involves both specialized hardware to interconnect computers and software to interpret or translate the information transferred.

By far the most common means of connecting computers in LANs is the Ethernet, a standard that defines both the protocol for data transfer and the cable used. It uses a method called carrier-sense multiple access with collision detection (CSMA/CD) to share a common cable among several computers.

CSMA/CD operates as follows. When one computer wishes to transmit data to another computer on the network, it first 'listens' to determine whether the cable is in use. If the cable is in use, the computer waits for a random period of time before listening again. If the cable is not in use, the computer transmits, and at the same time listens to ensure that another computer did not commence transmitting at the same moment. When two or more computers attempt to transmit at the same time, a 'collision' is said to occur. All the computers involved detect the collision and each waits for a random period of time before trying again. Until recently, Ethernet networks operated at a transmission rate of 10 megabits per second (Mb/s). However, the fast Ethernet protocol, which runs at 100 Mb/s on standard, high quality (CAT5) Ethernet cables, is gaining rapid acceptance, and hardware also exists for gigabit transfer rates, usually via optical fibres. Alternative approaches to the Ethernet exist for transfer via optical fibre (fibre distributed data interface (FDDI)) and for transfer, usually fast, via other media (e.g. asynchronous transfer mode (ATM), which can also be used over microwave links). However, these are beyond the scope of this overview.
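The listen/transmit/back-off cycle described above can be sketched as a minimal simulation. This is an illustration only, not vendor firmware: the busy and collision probabilities stand in for real carrier sensing and collision detection, and the function names are ours.

```python
import random

def backoff_slots(attempt, cap=10):
    """Binary exponential backoff: after the n-th collision a station
    waits a random number of slot times in [0, 2**min(n, cap) - 1]."""
    return random.randint(0, 2 ** min(attempt, cap) - 1)

def transmit(p_busy, p_collision, max_attempts=16, seed=0):
    """Return the attempt number on which a frame is finally sent.

    p_busy and p_collision are illustrative probabilities standing in
    for real carrier sensing and collision detection on the cable.
    """
    rng = random.Random(seed)
    for attempt in range(1, max_attempts + 1):
        if rng.random() < p_busy:
            continue                   # carrier sense: cable in use, wait
        if rng.random() >= p_collision:
            return attempt             # transmitted with no collision
        backoff_slots(attempt)         # collision: wait a random back-off
    raise RuntimeError("excessive collisions: frame dropped")
```

The cap of 16 attempts and the growth of the back-off window mirror the behaviour of classical Ethernet: a busy cable merely delays a station, while repeated collisions spread retransmissions further apart in time.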

Hardware linking computers involves a network interface card (NIC) on each connected device and various 'boxes' that interconnect cables or control the flow of traffic, limiting connection to specific Internet addresses. Without this traffic control, every computer worldwide would receive communication from all others, clearly an impossible situation. Examples of network devices are hubs, which simply connect cables without any attempt to alter traffic flow; switches, which permit interconnection between cables with the transfer rate maintained; and routers, which direct or filter traffic. The simplest network in nuclear medicine involves a direct connection between two machines, with appropriate software handling the network communication (usually with one machine acting as a server that effectively takes control of the network). Where more devices and interconnection to larger networks are involved, additional devices are necessary.

Another important networking standard which, like the Ethernet, has been implemented on all commonly used computer platforms is Transmission Control Protocol/Internet Protocol (TCP/IP). Although other protocols exist (e.g. Internetwork Packet Exchange (IPX) and NetBIOS Extended User Interface (NetBEUI)), TCP/IP is by far the most widely used and forms the basis for worldwide communication via the Internet. The term Internet dates back to the earliest days of TCP/IP in the early 1980s, when it was used to describe any network built using IP. Since that time, the power and flexibility of TCP/IP have resulted in the creation and explosive growth of the Internet, a worldwide network of networks linked by TCP/IP.

TCP/IP is a set of protocols designed to facilitate the interconnection of dissimilar computer systems, and it makes no assumptions about the nature of the connection between the computers. TCP/IP provides computer users with a number of useful services, and new services are being added regularly. Until the early 1990s, most Internet usage involved three TCP/IP protocols — Simple Mail Transfer Protocol (SMTP), Terminal Emulation Program for TCP/IP Networks (TELNET) and File Transfer Protocol (FTP). SMTP handles electronic mail delivery. With TELNET, an interactive session can be established with another computer connected to the Internet. FTP facilitates file transfer between computers. The network file system (NFS) is another useful TCP/IP protocol, which allows computers connected to the same LAN to share files.
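As an illustration of the kind of service TCP/IP provides, the sketch below moves a block of bytes between two programs over a loopback TCP connection, in the spirit of a minimal FTP-style file transfer. The function names are ours, not part of any standard, and a real service would add authentication and error handling.

```python
import socket
import threading

def serve_once(data: bytes, host="127.0.0.1"):
    """Tiny server: accept one connection and send `data` over TCP."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))                # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def handler():
        conn, _ = srv.accept()
        conn.sendall(data)             # TCP guarantees ordered delivery
        conn.close()
        srv.close()

    threading.Thread(target=handler, daemon=True).start()
    return port

def fetch(port, host="127.0.0.1"):
    """Tiny client: connect and read everything, like a minimal 'get'."""
    with socket.create_connection((host, port)) as sock:
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks)
```

Note that neither side cares what medium carries the packets; that independence from the physical connection is exactly the property of TCP/IP described above.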


The advent of the TCP/IP protocol and, later in the early 1990s, the HyperText Transfer Protocol (HTTP), brought about the creation of the World Wide Web. The World Wide Web is made up of millions of documents containing many sources of information (including text, graphics, sound and video) that are stored on computers called web servers. A web browser program such as Netscape is needed to fetch documents from a web server and display them with links appearing as highlighted text (hypertext). Clicking on a link fetches and displays the hypertext document addressed by the link. Web browsers provide users with a much friendlier, more intuitive interface to Internet resources than do FTP or TELNET.

4.6.3. Connecting imaging equipment via computer networks

Section 4.6.2 refers to general networking details that are common to all networks and that are not specific to nuclear medicine. In medical imaging, the transfer of image data between computers is common, either within nuclear medicine or between different imaging modalities. The networking of computers in this environment, with the possibility of circulating images for reporting purposes and review and central archiving, is commonly referred to as a Picture Archiving and Communication System (PACS). If fully implemented, this will link closely with both a Radiology Information System (RIS) and a Hospital Information System (HIS), both of which handle the administrative aspects of diagnostic imaging. Interconnection of remotely sited imaging modalities via the Internet (rather than a LAN) is usually referred to as teleradiology or telenuclear medicine. In this case the objective is usually to transfer images for remote reporting.

It should be recognized that medical images usually occupy fairly large files and therefore the speed of transfer remains a major concern, even with modern technology, particularly if there are many file transfers occurring. In a PACS system, direct high speed cabling can be arranged between critical components, usually dedicated to this purpose. In contrast, teleradiology is dependent on the slowest link between centres, which may be a modem connected to a telephone. Table 4.4 illustrates some typical figures for files encountered in diagnostic imaging and the time required for transfer via different media; clearly, nuclear medicine poses no problem compared with some other modalities.

In order to optimize speed for medical image transfer, images can be compressed by methods that may lose some information (lossy or irreversible methods) or, preferably, by lossless or reversible methods. Reversible methods are capable of retrieving exactly the same image data as originally compressed and are essential for most diagnostic reporting, whereas irreversible methods may still be acceptable in some non-diagnostic situations. The most popular approach currently in use was developed by the Joint Photographic Experts Group (JPEG), with lossless or lossy compression available depending on the mode used. A popular alternative involves wavelet compression. For lossless compression, a reduction in file length, with savings in storage space or improvement in transmission speed, by a factor of 2-3 is possible; for lossy compression the factor can be much higher (e.g. 10-20), depending on the modality.
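The factor of 2-3 for lossless compression is easy to demonstrate with any general purpose reversible compressor. The sketch below uses zlib rather than the JPEG lossless mode itself, on a synthetic 128 × 128 8-bit frame; the exact ratio achieved on real clinical images will differ.

```python
import zlib

def lossless_ratio(image_bytes: bytes) -> float:
    """Compression factor achieved by a lossless (reversible) method."""
    compressed = zlib.compress(image_bytes, level=9)
    # Reversibility check: decompression must restore the data exactly,
    # which is the property required for diagnostic reporting.
    assert zlib.decompress(compressed) == image_bytes
    return len(image_bytes) / len(compressed)

# Synthetic 128 x 128 8-bit frame: smoothly varying values, loosely
# mimicking the redundancy found in real medical images.
frame = bytes((x * y) % 64 for x in range(128) for y in range(128))
ratio = lossless_ratio(frame)
```

Because the decompressed bytes are verified against the originals, the same routine doubles as a check that no information was lost, something a lossy method could never pass.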

Even with established networks that permit image transfer between systems, there remains a further obstacle to a workable system. Most suppliers of medical imaging equipment have developed their own database structure for images, with a proprietary format for the structure of image files. After transferring a file to a second system, translation between file formats is necessary and is not a trivial step. Certain file formats have become useful as intermediaries between individual manufacturers' equipment. Examples are Interfile, which was developed specifically for nuclear medicine and includes a readable header that is easily edited, and the file structure developed initially by the American College of Radiology and NEMA (ACR–NEMA). Fortunately, more sophisticated standards have been established that define not only the file format but also the method to be used to establish communication between systems. The result is DICOM (Digital Imaging and Communications in Medicine), which extends ACR–NEMA but adheres to the more general networking standards summarized earlier in this section. Even with this standard available to link computers, there are frequently incompatibilities between different manufacturers' implementations of DICOM, resulting in missing administrative data or, in the worst case, an inability to transfer data. Most suppliers are actively improving their DICOM software as they now recognize the importance of connectivity. Useful tools for checking DICOM conformance are readily available.
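Interfile's readable header is essentially a list of 'key := value' text lines, which makes a minimal parser straightforward. The sketch below handles only that basic syntax (the full Interfile specification also defines comments, continuation rules and an end-of-file marker), and the sample keys shown are illustrative.

```python
def parse_interfile_header(text: str) -> dict:
    """Parse the readable 'key := value' lines of an Interfile header.

    Illustrative only: real Interfile headers carry many more keys and
    syntactic rules than this minimal sketch supports.
    """
    header = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(";"):
            continue                       # skip blanks and comments
        if ":=" in line:
            key, _, value = line.partition(":=")
            # Leading '!' marks mandatory keys; normalize it away.
            header[key.strip().lstrip("!").lower()] = value.strip()
    return header

sample = """\
!INTERFILE :=
!imaging modality := nucmed
matrix size [1] := 128
matrix size [2] := 128
number of images := 24
"""
hdr = parse_interfile_header(sample)
```

The fact that such a parser fits in a dozen lines is precisely why Interfile headers are described above as readable and easily edited.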

TABLE 4.4. SOME TYPICAL FIGURES FOR DIFFERENT DIAGNOSTIC IMAGING FILES AND RATES OF TRANSFER VIA DIFFERENT MEDIA

Study             Matrix size        Number of images   File size (MB)   Time via 100 Mb/s   Time via 32 kb/s
Mammography       4096 × 5120 × 12   4                  125              10 s                31250 s = 8.7 h
Functional MRI    256 × 256 × 12     300                30               2.4 s               7500 s = 2.1 h
CT                512 × 512 × 12     30                 12               1 s                 3000 s = 50 min
Nuclear medicine  128 × 128 × 8      24                 0.4              0.03 s              100 s
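The entries in Table 4.4 follow from simple arithmetic: file size is matrix area × bit depth × number of images, and transfer time is size divided by the raw link rate. The sketch below reproduces that arithmetic, ignoring protocol overhead, which would lengthen real transfers somewhat.

```python
def file_size_mb(width, height, bits_per_pixel, n_images):
    """Uncompressed file size in megabytes (10**6 bytes)."""
    return width * height * bits_per_pixel * n_images / 8 / 1e6

def transfer_seconds(size_mb, link_bits_per_second):
    """Time to move the file over a link of the given raw bit rate."""
    return size_mb * 8e6 / link_bits_per_second

mammography = file_size_mb(4096, 5120, 12, 4)      # about 126 MB
nuc_med = file_size_mb(128, 128, 8, 24)            # about 0.4 MB
slow_link = transfer_seconds(mammography, 32_000)  # about 8.7 hours
```

Running the same calculation for each row confirms the pattern the table illustrates: a nuclear medicine study crosses even a 32 kb/s modem link in under two minutes, while a mammography study at the same rate takes most of a working day.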
