

HAL Id: jpa-00221934

https://hal.archives-ouvertes.fr/jpa-00221934

Submitted on 1 Jan 1982


"ROUND" TABLE ON ACCELERATORS - "THE ACCELERATOR AFTER NEXT"

L. Lederman, J. Adams, R. Palmer, B. Richter, A. Salam, G. Voss, V. Yarba

To cite this version:

L. Lederman, J. Adams, R. Palmer, B. Richter, A. Salam, et al. "ROUND" TABLE ON ACCELERATORS - "THE ACCELERATOR AFTER NEXT". Journal de Physique Colloques, 1982, 43 (C3), pp.C3-607-C3-630. 10.1051/jphyscol:1982379. jpa-00221934


JOURNAL DE PHYSIQUE

Colloque C3, supplément au n° 12, Tome 43, décembre 1982, page C3-607

Organized by L. Lederman

Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510, U.S.A.

with the participation of J. Adams, R. Palmer, B. Richter, A. Salam, G. Voss and V. Yarba

Introduction

The organizers wanted a survey of the field of accelerators, I suppose, in order to gauge where our science is going. Rather than review the already initiated construction projects, e.g. LEP, TEVATRON, UNK, TRISTAN, I decided to adopt the theme: The Accelerator After Next, and to discuss only uninitiated accelerators. It seemed appropriate to begin with one of our leading theorists and charge him with motivating the discussion. I have also asked John Adams to contribute a report on the recent activities of ICFA, the International Committee on Future Accelerators, a very relevant subject. Clearly, I have omitted important sources of material and I do hope these omissions will be rectified by comments from the floor.

THE IMPENDING DEMISE OF HIGH ENERGY ACCELERATORS

Abdus Salam

International Centre for Theoretical Physics, Trieste and

Imperial College, London

There is no question that, unless our community takes urgent heed, high energy accelerators may become extinct in a matter of thirty years or so.

Consider the case of CERN, representing European High Energy Physics. With LEP, completed in 1987 with CM energy ~ 1/10 TeV, CERN will have inherited a magnificent tunnel of 27 km circumference. One may expect to install in it a proton ring which, by the year 2000, may provide ep, pp, and pp̄ collisions with CM energies (optimistically) up to 10 TeV (assuming applicability of 5 T magnets) and 20 TeV (for magnets of 10 T). But that will be the end; this is because high energy accelerators have become like dinosaurs: large, costly, energy- and site-intensive, impersonal. And much worse, because (except for stochastic cooling) no new ideas have been worked out prototypically in accelerator building for thirty years.
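The tunnel arithmetic can be checked with the standard bending relation p[GeV/c] ≈ 0.3 B[T] ρ[m]. The 80% dipole packing fraction below is my assumption for illustration, not a figure from the talk:

```python
import math

# Rough check of the estimate for a proton ring in the 27 km LEP tunnel,
# using the bending relation p[GeV/c] = 0.3 * B[T] * rho[m].
# The 80% dipole packing fraction is an assumed figure.
CIRCUMFERENCE_M = 27000.0
FILL = 0.8                                    # assumed fraction of the ring occupied by dipoles
RHO = FILL * CIRCUMFERENCE_M / (2 * math.pi)  # effective bending radius in metres

def cm_energy_tev(b_tesla):
    """pp centre-of-mass energy in TeV (twice the single-beam energy)."""
    return 2 * 0.3 * b_tesla * RHO / 1000.0

print(f"5 T magnets:  ~{cm_energy_tev(5.0):.0f} TeV c.m.")   # ~10 TeV
print(f"10 T magnets: ~{cm_energy_tev(10.0):.0f} TeV c.m.")  # ~21 TeV
```

With 5 T dipoles this reproduces the quoted 10 TeV, and with 10 T the 20 TeV figure, to within the packing-fraction assumption.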

Contrast this with the expectations of the theorists, and their natural habitat, so far as energy-ranges are concerned.

Article published online by EDP Sciences and available at http://dx.doi.org/10.1051/jphyscol:1982379


Up to 1965, we were content with Yukawa's legacy of m_π and the Regge slope (~ 1/10 GeV) as energy units. After that date, we, hesitantly and with shyness, graduated to the 1/10 TeV range of comparison with theory. This energy range (and beyond) has now been experimentally realized with the 1/2 TeV of the pp̄ collider. Around 1974, with dramatic suddenness, came the realization that the SU(3), SU(2) and U(1) gauge forces, if extrapolated in energy using renormalisation group ideas, would carry us to 10^11 TeV. And then, in 1976, with supergravity ideas, and the possibility of unification of gravity with other forces, the Planck energy M_P ~ 10^16 TeV came to be accepted as the natural habitat of particle physics.* This catalogue of high energies is depressing for accelerator building. But even more depressing is the statement which has been made that there may be no new physics between 0.1 TeV and 10^11 TeV: the desert syndrome.

Let us examine this statement. The desert syndrome is a consequence of two assumptions: (1) assume that there are no new gauge forces except the known SU(3), SU(2) and U(1) between the presently accessible 0.1 TeV and an upper energy Λ₀; (2) assume that no new particles will be discovered in this range which might upset the relation sin²θ_W(Λ₀) = 3/8, satisfied for known quarks and leptons. With these two assumptions, the renormalisation group extrapolation shows that the effective couplings of the three gauge forces SU_c(3), SU(2) and U(1) converge to the same value at the same energy Λ₀ and, further, this Λ₀ has the high value ~ 10^11 TeV. To put it irreverently: assume that there is a desert of new physics, where by new physics we mean new gauge forces, and the theory will show that this assumption can be self-consistently upheld, with the desert stretching up to 10^11 TeV.
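The renormalisation-group extrapolation can be sketched numerically at one loop. The starting couplings and beta coefficients below are standard one-loop Standard Model values, my own inputs rather than figures from the talk; the three pairwise crossings land between roughly 10^13 and 10^17 GeV, the "desert" range discussed above:

```python
import math

# One-loop running of the three gauge couplings (GUT-normalized U(1)):
# alpha_i^-1(mu) = alpha_i^-1(M_Z) - b_i/(2 pi) * ln(mu / M_Z).
# Inputs are standard approximate Standard Model values, assumed here.
M_Z = 91.0  # GeV
alpha_inv = {1: 59.0, 2: 29.6, 3: 8.5}         # 1/alpha_i at M_Z (approximate)
b = {1: 41.0 / 10.0, 2: -19.0 / 6.0, 3: -7.0}  # one-loop beta coefficients

def alpha_inv_at(i, mu):
    """Inverse coupling 1/alpha_i at scale mu (GeV), one loop."""
    return alpha_inv[i] - b[i] / (2.0 * math.pi) * math.log(mu / M_Z)

def crossing_scale(i, j):
    """Energy (GeV) at which couplings i and j become equal."""
    t = 2.0 * math.pi * (alpha_inv[i] - alpha_inv[j]) / (b[i] - b[j])
    return M_Z * math.exp(t)

for (i, j) in [(1, 2), (1, 3), (2, 3)]:
    print(f"alpha_{i} meets alpha_{j} near {crossing_scale(i, j):.1e} GeV")
```

That the three crossings do not coincide exactly is a well-known feature of the non-supersymmetric one-loop extrapolation; the order of magnitude is what matters for the argument.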

Clearly, one may question the basic assumptions. To motivate this questioning and to define the intermediate energy scale at which new physics may be discovered (and at which the next generation of accelerators may be aimed) one should examine the weaknesses of the grand unified theories, i.e., the minimal SU(5) or SO(10) or E6 or the maximal SU(16), which incorporate SU(3) x SU(2) x U(1). It is well known that all these theories are uniformly embarrassed by the following difficulties: (1) the profusion of the Higgs sector and the parameters associated with it; (2) the existence of three apparently identical families; and (3) the theoretical problem of hierarchies, i.e., the theoretical inconsistency, in a perturbation context, of having just two scales in the theory (0.1 TeV and 10^11 TeV) so widely separated from each other. It is these weaknesses (and their amelioration) which provide us with clues to new physics and possible intermediate energies for the new accelerator to explore. Consider these in turn:

The Higgs Sector

The Higgs Sector of the gauge theories is at once an embarrassment as well as a source of richness in physics.

Embarrassment, because each Higgs particle introduces into the theory, on the average, at least 5 new undetermined parameters. Richness, because with the Higgs particles is associated most of experimentally accessible physics: quark, lepton and neutrino masses, gauge masses, axions, N-N̄ oscillations, proton decays into leptons (as contrasted to anti-leptons), cosmological phenomena, and all the exciting early Universe scenarios. To take a concrete example, the minimal SU(5), which started life with just two Higgs (a 5 and a 24) and with just 10 Higgs parameters, has recently been supplemented with (5, 10, 15, 45, 50, and 75) of Higgs to accord to it the desirable richness of testable physical phenomena.

* True enough, after 1979, with the work of Cremmer and Julia, and the revival of Kaluza-Klein higher dimensional theories, there has been a slight remission in energy scales. We believe, for example, that already at a lower energy of the order of 10^15 TeV (below M_P), space-time will have blossomed from four to eleven dimensions, so that our whole epistemology may have to be revised.

Now how can one compute these parameters from some fundamental theory? One answer, favored for the last three years, was to consider these Higgs as composites, dynamically held together by a new type of gauge force, called the techni-colour force, with an associated (confinement) scale of around 1 TeV. This idea has recently run into difficulties with flavour-changing neutral currents. It has now been replaced by the hypothesis that all presently known particles (quarks, leptons, Higgs, as well as the gauge particles) may be composites of a next level of elementary entities: the preons.

The force which binds preons together replaces the techni-color force. In this picture quarks and leptons would have inverse radii between 10 and 100 TeV. I would like to suggest that the next generation of accelerators should aim at this preonic level of structure (100 TeV). It is indeed at these energies that the quark form factors will show experimentally. It is relevant in this context to remark that the present variety of indirect experiments on lepton sizes do give 10-100 TeV as the inverse radii of these particles.
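For orientation, an inverse radius quoted in TeV converts to a length via r = ħc/E; the ħc value is the standard conversion constant, and the two scales evaluated are those quoted in the text:

```python
# Converting an "inverse radius" quoted in TeV into a length, via r = hbar*c / E.
HBARC_TEV_CM = 1.9732697e-17   # hbar*c expressed in TeV * cm

def radius_cm(inverse_radius_tev):
    """Compositeness length scale for a given inverse radius in TeV."""
    return HBARC_TEV_CM / inverse_radius_tev

print(radius_cm(100.0))  # 100 TeV preon scale  -> ~2e-19 cm
print(radius_cm(1.0))    # 1 TeV universality bound -> ~2e-17 cm
```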

These are experiments which relate to the following processes:

Experiment            Radii
e/μ universality      ~ (100 TeV)^-1
τ universality        ~ (1 TeV)^-1

The estimated radii are somewhat model-dependent on the definition adopted for radii. More conjectural than 100 TeV as an intermediate energy are the possibilities of higher intermediate levels, which may be associated with the breakdown of the left-right symmetry (with its V + A current and predicted existence of ν_R) characteristic of all grand-unifying theories except SU(5). Such a breakdown may manifest itself anywhere between 1/3 and 1000 TeV. And, finally, there are the intermediate energies associated with the resolution of the hierarchy problem. As is well known, a recent suggestion to resolve this difficulty of grand-unifying theories is through an assumption of supersymmetry, a postulated Fermi-Bose symmetry. Such a symmetry would have a characteristic scale of breaking associated with it. Again, this may range anywhere from a few TeV to very much higher energies, though the indirect effects of supersymmetry may manifest themselves much earlier.

It is perhaps interesting to remark in the context of these intermediate energies that 100 TeV is also associated with the possible decays of protons into three leptons or three antileptons, while 10^6 TeV is associated with decays into a single lepton.

My summary conclusions are as follows:

1. Do not ask theorists at which energy to aim for the next generation of high energy accelerators. Aim at the highest possible. One may recall Lord Kelvin who (dazzled with what his generation had accomplished in the nineteenth century) remarked in his address to the British Association for the Advancement of Science: "There is nothing new to be


discovered in Physics now; all that remains is more and more precise measurement." This happened to be the same year when (subsequent to Lord Kelvin's speech) J.J. Thomson announced the discovery of the electron!

2. The chief limitation to achieving higher energies in accelerators, I believe, is the present rather low value of the gradients of accelerating fields, which range no higher than a few tens of GeV/km. With proposed laser accelerators (employing collective ions, gratings, and, particularly, laser beam concepts), such gradients may be raised (I believe) up to 100 TeV/km. At present, these laser accelerators are no more than paper concepts. It is essential that these ideas are pursued with vigour, with young theorists in the teams to be constituted and generously funded by the Universities and by the national accelerator laboratories in Europe, USA, USSR and Japan. This last remark is particularly addressed to our Chairman.

3. For the decade between 2001 and 2011, I would suggest that the community should set as a target the design, installation and operation of a 1000 TeV CM accelerator, possibly employing laser acceleration ideas. John Mulvey has asked me to advertise an ECFA meeting to be held at Oxford between 27th and 30th September to consider such projects.

4. And, finally, I would like to remind the members of our community that the ultimate accelerator will perhaps consist of electromagnetic bottles of monopoles of 10^13 TeV, culled from iron ore concentrations heated above the Curie temperature, as suggested by Cline at the Venice Conference.

THE FUTURE OF ELECTRON COLLIDERS*

Burton Richter

Stanford Linear Accelerator Center

Stanford University, Stanford, California 94305

The construction of the first electron storage ring, the Princeton-Stanford machine, was begun in 1958. The construction of LEP, the newest and largest of the electron storage rings, is beginning now in 1982. In this period of about twenty-five years the radii of these machines have grown five-thousandfold, from about 1 meter to about 5 kilometers. At the same time the energy of the machines has increased a hundredfold, from the 500 MeV of the first machine to the 50 GeV of LEP. I believe many more storage rings will be built in the future, but these machines will not significantly advance the energy frontier for e+e- physics beyond that which can be reached in the second phase of LEP (about 200 GeV in the center of mass).

*Work supported by the Department of Energy, Contract DE-AC03-76SF00515.


It is the scaling laws for storage rings which will limit their advance in energy. When electrons are bent in a circle they emit synchrotron radiation and the energy loss per turn required to make up for this synchrotron radiation goes up as the fourth power of the energy divided by the first power of the bending radius.
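The scaling can be made concrete with the standard synchrotron-radiation formula U₀[keV] ≈ 88.5 E⁴[GeV]/ρ[m]; the LEP-like numbers below (100 GeV beam, ~3.1 km bending radius) are illustrative assumptions, not figures from the talk:

```python
# Synchrotron energy loss per turn for an electron ring, illustrating the
# E^4 / rho scaling: U0 [keV] ~ 88.5 * E[GeV]^4 / rho[m] (standard formula).
def loss_per_turn_gev(e_gev, rho_m):
    """Energy radiated per turn (GeV) by one electron."""
    return 88.5e-6 * e_gev**4 / rho_m

# Assumed LEP-like parameters, for illustration only:
print(loss_per_turn_gev(100.0, 3100.0))   # ~2.9 GeV lost per turn

# Doubling the energy at fixed radius multiplies the rf power needed by 16:
print(loss_per_turn_gev(200.0, 3100.0) / loss_per_turn_gev(100.0, 3100.0))  # 16.0
```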

Radius-dependent costs, such as magnets, tunnels, etc., power-dependent costs for the rf system required to make up synchrotron radiation losses, and constraints on machine design coming from the beam-beam interaction and the focusing system result in a system of equations that allows the designer to minimize the cost of a machine. For an electron storage ring, the minimum cost solution is one where cost and size are proportional to the square of the center-of-mass energy. The same scaling law is obtained whether a superconducting or conventional rf system is used.

LEP-I costs about $500 million to obtain 100 GeV in the center of mass. I will guess that LEP-II at 200 GeV in the center of mass with superconducting rf will cost about $200 million additional. Using the scaling law implies that a 1-TeV machine, which is a not unreasonable next step, would cost about $17.5 billion, have a circumference of nearly 700 kilometers, and consume gigawatts of electric power. While the cost of such a device is negligible compared to the arms budget of the world (very roughly $600 billion per year), it is quite large compared to the total high energy physics budget of the world (about $1.4 billion per year). The fiscal feasibility of such a storage ring is in doubt, and, in addition, there is some evidence that there are technical problems in building machines this large.
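The extrapolation is a one-line computation from the quadratic scaling law, using the LEP figures quoted in the talk (about $700M total at 200 GeV, 27 km circumference):

```python
# Quadratic cost/size scaling for e+e- storage rings: cost ~ (E_cm)^2.
def scaled(value_at_ref, e_ref_gev, e_target_gev, power=2):
    """Scale a cost or size by (E_target / E_ref) ** power."""
    return value_at_ref * (e_target_gev / e_ref_gev) ** power

lep2_cost_musd = 700.0        # $M: LEP-I ($500M) plus the LEP-II increment (~$200M)
lep_circumference_km = 27.0

cost_1tev = scaled(lep2_cost_musd, 200.0, 1000.0)        # -> 17500 $M, i.e. $17.5B
circ_1tev = scaled(lep_circumference_km, 200.0, 1000.0)  # -> 675 km, "nearly 700"
print(f"1 TeV ring: ~${cost_1tev / 1000:.1f}B, ~{circ_1tev:.0f} km circumference")
```

A factor of 5 in energy becomes a factor of 25 in both cost and circumference, reproducing the $17.5 billion and near-700 km figures.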

This situation, wherein cost or technical limitations close the energy frontier for a given type of accelerator, is not new. An examination of the Livingston plot (Fig. 1) shows that we have faced this problem often in the past. For many years the energy frontier for accelerators has moved up by a factor of 10 every six years. We have maintained the pace by switching to new types of accelerators when one type has reached technical or fiscal limitations.

Is there an alternative to the storage ring? I think there is, and I think it is the linear collider system whose scaling laws were worked out at the first ICFA workshop by Tigner (Cornell), Skrinskii (Novosibirsk), myself, and others. The luminosity of these machines is proportional to the power in the beam and independent of the energy. The scaling law is such that the cost and length of a facility, where two linacs fire intense electron and positron bullets at each other, are proportional to the first power of the energy rather than to the square. Linear colliders tolerate a much stronger beam-beam interaction than do storage rings, and the beam-beam interaction seems to enhance the luminosity rather than to decrease it, as is the case in storage rings. There are new issues in accelerator physics involved in linear colliders, among which are the production and control of micro-size beams at the collision point, and the handling of peak currents in linacs that are a hundred times more intense than we are used to.

At SLAC we hope to start building a variant of the linear collider scheme, the SLC, in late 1983, if the U.S. Government follows the recommendations of its Department of Energy Advisory Committees and supplies the funds. The SLC, when completed (at the end of 1986 at the earliest), will allow us to investigate such things as the beam-accelerator interaction, the beam-beam effect, control problems, etc., as well as to carry out an exciting high energy physics experimental program at 100 GeV in the center of mass.


But what about very large linear colliders? The SLC doesn't advance the energy frontier beyond that which can be reached by the LEP storage ring. I guessed earlier that the next step in big colliders would have to reach about 1 TeV in the center of mass, where theory indicates that the next mass scale might lie. If we use the Weinberg-Salam model as a guide, such a machine will need a luminosity of 10^33 cm^-2 sec^-1, for above the Z0 mass the cross section decreases as the square of the energy. Extrapolating from some parameters of the SLC, I find that a machine with a luminosity of 10^33 will require a beam power of about 7 MW per beam. If we can make energy-efficient, low cost per unit length accelerators, the world high energy physics community can afford such a machine at its present budget level.
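The luminosity scale can be motivated from the QED point cross section σ ≈ 86.8 nb / s[GeV²] for e+e- to muon pairs (a standard formula, assumed here rather than quoted from the talk):

```python
# Why ~1e33 cm^-2 s^-1 at 1 TeV: the point cross section falls as 1/s,
# so high luminosity is needed to keep the event rate usable.
def sigma_point_cm2(sqrt_s_gev):
    """QED e+e- -> mu+mu- cross section: 86.8 nb / s[GeV^2], converted to cm^2."""
    return 86.8e-9 * 1e-24 / sqrt_s_gev**2   # 1 barn = 1e-24 cm^2

lumi = 1e33   # cm^-2 s^-1
year = 1e7    # seconds of running, the conventional "accelerator year"

events = sigma_point_cm2(1000.0) * lumi * year
print(events)   # ~870 muon pairs per year per unit of R at 1 TeV
```

A few hundred events per year in a single benchmark channel is roughly the minimum for a physics program, which is why the quoted 10^33 is the right order of magnitude.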

However, we don't now know the best way to build such a machine, and an intense R&D program will be required to determine the best road.

The first decision which one might make I would call the "warm or cold" decision, i.e., normal or superconducting rf structures. Table I shows what one might expect for a superconducting system based on presently achievable Q. The table assumes a Q of 5 x 10^9 at S-band, a refrigerator efficiency of 0.1% at 2.3 K, and a heat leak to room temperature of 2 watts per meter; it then displays, as a function of accelerator gradient, the length of the system, the refrigerator power required to handle the load coming from finite Q, and the refrigerator power required to handle the load from the heat leak. At present we can probably obtain a gradient of 5 MV per meter reasonably reliably.

TABLE I

Some parameters of superconducting linear colliders.

For various values of accelerating gradient (G), I give the total length of the two linacs (2L), the refrigerator power required because of finite Q (P_Q), the refrigerator power required because of heat leaks (P_L, calculated for a heat leak of 2 W/m), and the total refrigerator power (P_T).
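The quantities in Table I can be estimated as follows. The Q, refrigerator efficiency, and heat leak are the stated assumptions; the 1 TeV c.m. energy and the effective S-band shunt impedance r/Q ≈ 1000 Ω per meter are my own assumptions for illustration:

```python
# Sketch of the Table I quantities for a superconducting linear collider.
# Stated assumptions: Q = 5e9 at S-band, 0.1% refrigerator efficiency at
# 2.3 K, heat leak 2 W/m.  Assumed here: 1 TeV c.m., r/Q ~ 1000 ohm/m.
E_CM_EV = 1e12          # two linacs of 0.5 TeV each
Q = 5e9
R_OVER_Q = 1000.0       # ohm per metre of structure (assumed)
EFF = 1e-3              # wall-plug watts per watt removed at 2.3 K
LEAK_W_PER_M = 2.0

def table_row(g_mv_per_m):
    """Return (2L in m, P_Q, P_L, P_T in wall-plug watts) for a gradient."""
    g = g_mv_per_m * 1e6                               # V/m
    total_len = E_CM_EV / g                            # 2L
    p_q = g**2 / (R_OVER_Q * Q) * total_len / EFF      # cavity-loss load
    p_leak = LEAK_W_PER_M * total_len / EFF            # static heat-leak load
    return total_len, p_q, p_leak, p_q + p_leak

for g in (5, 10, 20):
    l, pq, pl, pt = table_row(g)
    print(f"G={g:2d} MV/m  2L={l / 1e3:5.0f} km  P_Q={pq / 1e6:5.0f} MW  "
          f"P_L={pl / 1e6:4.0f} MW  P_T={pt / 1e6:5.0f} MW")
```

The two loads pull in opposite directions: the cavity-loss power grows with gradient while the heat-leak power shrinks with the machine length, which is why a power-consumption minimum exists near the low-gradient end.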


This gradient is at the power consumption minimum, but using some cost per unit length figures for superconducting structures from Tigner, it is not at the cost minimum. The cost minimum would occur at a gradient of about 20 MV per meter. At this minimum the construction costs for a superconducting system would be, very roughly, down by about a factor of 5 from the scaled storage ring. I would conclude that superconducting systems need considerable work on improving the attainable gradient and cavity Q's to reduce costs significantly.

There is much activity in the study of warm systems, all of which emphasizes the high accelerating gradients which are required to reduce the effect of the beam-accelerator structure interaction and to reduce capital costs. Below is a brief description of four systems that I know about; there may be more.

a. Conventional rf structures with high-power sources. At SLAC in the mid 1960's it was shown that copper can stand surface fields of at least 150 MV per meter. These fields imply, in properly designed structures, accelerating gradients of more than 100 MV per meter.

Such structures need gigawatt peak power sources to drive them. The Novosibirsk group will soon be testing a structure designed for these kinds of accelerating gradients, and preliminary design studies on structures and power sources are going on at SLAC.

b. RF transformer systems. These transfer the energy from a low energy, high current beam to accelerate a lower current beam. An example of this type of system is the "Wake Field" accelerator of Voss and Weiland, which will be discussed later at this meeting.

c. Laser accelerators. One of the early suggestions for a laser accelerator was that of Palmer which proposed the use of the longitudinal field near a grating for accelerating particles. A system which promises a larger phase acceptance is what might be called a laser "beat wave" accelerator recently described by Tajima and Dawson. In this system two laser beams are fired into a plasma.

The difference in frequency of the lasers is equal to the plasma frequency. This generates a traveling plasma wave with a large electron density that does the acceleration.

d. The ionization front accelerator. The advance of a high current, low energy electron beam entering a neutral plasma is controlled by the ionization of the plasma with an auxiliary laser. The ionization front is made to travel in synchronism with the velocity of the particles to be accelerated. Olsen et al., Sandia, have demonstrated proton acceleration of about 5 kV over 10 cm with this system.

One can see there are many new ideas for accelerating systems.

Not all of them will be applicable to the acceleration of the very small, very intense beams required for linear colliders, but in the next few years we will have to see which of these (or other) systems shows the most promise and to begin prototype accelerator system studies to evaluate costs and technical feasibility. I would hope to see a 1 GeV accelerator, less than 10 m long, in the late 1980's.

Once we have reached this stage we can then begin a large-scale physics machine aimed at reaching greater than 1 TeV in the center of mass. Since many of these promising ideas are new, there are no "experts," and anyone in the physics community can contribute. I look forward to an exciting decade of development.


5-10 TeV PROTON COLLIDING BEAMS AND SOME PHYSICAL ASPECTS

V. A. Yarba

Institute of High Energy Physics, Serpukhov, Moscow District, Union of Soviet Socialist Republics

It is not so easy at present to predict what problems of elementary particle physics will be acute in the year 2000. However, some of the modern experimental results and theoretical predictions make us think that problems such as heavy quarks, vector bosons, and the compound nature of quarks and leptons will still remain uninvestigated or quite open.

The mass of a heavy quark, e.g., the t-quark, seems to lie in the region 18 < M_t < 300 GeV/c². It can also be assumed that the heavy quark production cross section obeys scale invariance. Using experimental data on strange or charmed particle production cross sections, one can then estimate the expected yield.

About 10^6 events may be expected during a reasonable period of experiment on the pp-ring (10^7 s) at the design luminosity. These events will possess a number of peculiarities, i.e., multijet character, large mass of jets, many leptons, neutrino energy emission. This gives a possibility to select events with heavy quarks against the background of usual hard processes.

Modern theory quite successfully predicts the properties of intermediate vector bosons. The e+e- beams are the best for their investigation. However, there may exist several intermediate bosons with different, even very large, masses. In this case the pp-ring can provide us with very valuable information about the properties of these particles. If the coupling constant of W± and Z0 with usual quarks is defined by the Fermi constant, then the W± production cross section is proportional to G_F τ L(τ), where τ = m_W²/s and L(τ) is the parton-model luminosity. For m = 300 GeV/c² and √s = 6 TeV the resulting cross section gives us a hope to detect quite reliably heavy vector W± and Z0 bosons on colliding pp rings.


Finally, if quarks and leptons are compound particles, extremely unusual effects may be observed. From the data on the anomalous magnetic moment of the muon it follows that the radius is not more than r ~ 1/(0.5 TeV). Let this radius be characteristic for other fermions as well.

At high energies the interactions of compound fermions with each other, which are similar to low energy interactions of usual hadrons, would result in multiple production of various particles. In the case in which leptons and quarks consist of one and the same elementary objects, there should be quite a significant amount of leptons in the products of interaction between compound quarks. A large transverse momentum of interaction products is another important feature of these interactions. As the upper limit for the cross sections of these processes at high energies we shall adopt σ = N_q² π r² (here N_q is the number of quarks in the proton). At the UNK energies the cross sections may quite noticeably be lower, but the events themselves are so extraordinary that they may be detected even at very small cross sections. The main difficulty in this case will be the background from usual heavy quarks, if they exist at all.
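Evaluating this upper limit numerically, with N_q = 3 and r = 1/(0.5 TeV) as above (the ħc conversion constant is standard; the numerical result is my evaluation, not a value from the talk):

```python
import math

# Upper-limit cross section sigma = N_q^2 * pi * r^2, with N_q = 3 quarks
# per proton and r = 1/(0.5 TeV) from the muon (g-2) bound quoted above.
HBARC_TEV_CM = 1.9732697e-17     # hbar*c in TeV * cm
r_cm = HBARC_TEV_CM / 0.5        # ~4e-17 cm
sigma = 3**2 * math.pi * r_cm**2
print(sigma)                     # ~4e-32 cm^2
```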

In connection with the P-parity violation in weak interactions, authors have put forward an approach based on the gauge symmetry of electroweak interactions, which leads to new experimental trends, i.e., the search for heavy mirror quarks and leptons (m ≲ 1 TeV) and an intermediate vector electroweak W boson (100-1000 GeV).

The technicolor model predicts many new phenomena, among them new particles which are constituents of leptons and quarks. For η_t (techni-eta), a colour pseudoscalar particle with a mass of 240 GeV, one may obtain an appreciable production rate: it may be produced in gluon-gluon interactions and decays mainly into the heaviest t-quarks. Note that η_t is produced on gluons, so the energy dependence of the cross section is governed by the gluon luminosity. This fact does not allow us to observe η_t at the ISR, the Doubler and the CERN Collider.

The examples considered above lead us to the conclusion that 3 TeV x 3 TeV colliding beams with adequate luminosity are very promising for investigations in the field of new physics in the TeV energy region.

There are, however, numerous indications from cosmic ray experiments that various new effects are observed at high energies with very large (~ 10^-26 cm²) cross sections; in this case the energy threshold at which these effects manifest themselves is 10^2 - 10^3 TeV (500-1500 GeV in the c.m.s.).
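The lab-to-c.m.s. correspondence follows from s ≈ 2 E_lab m_p for a fixed-target collision:

```python
import math

# Fixed-target kinematics: s ~ 2 * E_lab * m_p (mass terms negligible here),
# so sqrt(s) = sqrt(2 * E_lab * m_p).  Checks the 10^2 - 10^3 TeV
# <-> 500 - 1500 GeV correspondence quoted in the text.
M_P_GEV = 0.938   # proton mass

def sqrt_s_gev(e_lab_tev):
    """c.m.s. energy (GeV) for a proton of lab energy e_lab_tev on a proton at rest."""
    return math.sqrt(2.0 * e_lab_tev * 1000.0 * M_P_GEV)

print(sqrt_s_gev(100.0))    # ~433 GeV
print(sqrt_s_gev(1000.0))   # ~1370 GeV
```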

Let us enumerate the most interesting ones:


- at energies above 100 TeV the scale invariance is greatly violated even in the fragmentation region;

- jets with large transverse momentum start to appear more often;

- the production cross section for long-lived (charmed?) particles grows quite noticeably;

- there appear new separate events, crucially different in their basic properties from standard phenomena: Centauro events, binuclear jets, etc.

Two conclusions follow from these data:

First, even at comparatively low luminosity, qualitatively new phenomena may be observed in the investigations of hadron-hadron interactions at 1 TeV colliding beams;

Second, these new phenomena may mean that a simple extrapolation of the quark-parton model, used for peripheral process analysis and for hard process calculations, may turn out to be wrong when going to superhigh energies, which makes the above cross section estimates not very reliable.

New experimental results on microworld properties will obviously be obtained at new accelerators of FNAL, CERN, IHEP and others as soon as they are put into operation. Still further progress in elementary particle physics is connected with colliding beams and very big accelerators with fixed targets with a corresponding progress in technology, e.g., superconducting magnets and, possibly, new accelerator ideas.

CONTRIBUTION TO THE PANEL DISCUSSION ON NEW MACHINES

G. A. Voss

Deutsches Elektronen-Synchrotron DESY, Hamburg, Federal Republic of Germany

Present day accelerators, i.e., synchrotrons and linear accelerators, storage rings and colliding linacs, are limited in their maximum energy by economic considerations only. I have collected prices for the more important components, and I want to discuss in particular the impact of the new superconducting technologies on the energy-cost picture.

In the first diagram you see tunnel prices per meter in 1981 U.S. dollars. Depending on geology and technique used, they may vary by a factor of 2 about an average of approximately $4000 per m. Also shown are prices per m for conventional 2 T proton synchrotron magnets and 4.5 T superconducting magnets; the latter include costs for the refrigeration system. Magnet systems and the tunnel costs are the main expenses for large proton synchrotrons and proton storage rings. A higher magnetic field allows us to reach the same energy with smaller machines. Although individual prices may be a matter of great controversy, the conclusion which I draw from these numbers may not be so controversial: today's superconducting magnet technology makes only a very marginal improvement in the cost efficiency of accelerators. The greatest advantage of superconducting magnets is given if a tunnel already exists and one aims for the highest proton energy within that tunnel, or whenever the available real estate puts a limit on the size of an installation.

While the costs of proton machines increase approximately linearly with energy, costs for electron/positron storage rings increase quadratically. The quadratic dependence arises from the quadratic increase in the size of the installation with energy. This in turn has its reason in the steep increase of synchrotron radiation losses with energy, which causes the economically optimum radius to be proportional to E². In the next figure you see this quadratic dependence. Those contributions which will not increase quadratically with energy (experimental halls, access tunnels to the halls, injection and general infrastructure) have been taken out of the quoted prices. LEP (phase 2) and PETRA lie very closely on such a parabola.

Superconducting radiofrequency allows higher rf gradients, and this in turn allows smaller machines. I believe that the radius, and thereby the price, is proportional to (rf gradient)^(-1/2). Shown in this graph is also the cost/energy curve for electron/positron storage rings with superconducting cavities operating at a gradient of 3 MV/m (as compared to the 1 to 1.2 MV/m for conventional rf systems). CESR II lies exactly on this curve and scales also in size in the described manner. 3 MV/m seems to be a realistic goal for superconducting rf systems in the opinion of many experts; 10 MV/m, in my opinion, must be considered the extreme of permissible optimism. This curve is also shown. The same figure shows also the cost of present day linear accelerators to be used in linear collider schemes. Their costs increase linearly with center of mass energy (E_cm). Power doubling schemes like SLED decrease this cost gradient. My conclusions from this graph are that

1. superconducting rf technology may increase the maximum energy of e+/e- storage rings by a factor of 1.3 or reduce the cost of machines at given energy by a factor of 2;

2. linear colliders will win in the race to higher energies, but the cost of present day linear accelerators is prohibitive;

3. to make linear colliders economically feasible, accelerating gradients are necessary which are much higher than the 10-20 MV/m obtained today.
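A quick consistency check of conclusion 1, assuming the economically optimum radius (and hence the cost at fixed energy) scales as (rf gradient)^(-1/2) while cost grows as E² at fixed gradient:

```python
# If cost ~ E^2 * g^(-1/2), then at fixed cost E ~ g^(1/4), and at fixed
# energy the cost falls as g^(-1/2).  Gradient ratio: 3 MV/m superconducting
# vs ~1 MV/m conventional (figures from the text).
g_ratio = 3.0 / 1.0

energy_factor = g_ratio ** 0.25   # max energy gain at fixed cost
cost_factor = g_ratio ** 0.5      # cost reduction at fixed energy

print(round(energy_factor, 2))    # ~1.3, matching conclusion 1
print(round(cost_factor, 2))      # ~1.7, same order as the quoted factor of 2
```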

Fig. 3 shows the cost estimates which have been used for Fig.

2. One can see that in a linear accelerator the cost of the accelerator structure together with the klystrons, modulators and power supplies dominates by far the tunnel costs. Only higher gradients in the structure will significantly decrease the overall costs. Many people have recognized this problem and have made suggestions on how to break through this cost barrier. Tom Weiland and I have studied a new accelerating method which may produce gradients in the range of 300 MV/m. The acceleration is done by the wake field of a very intense relativistic electron beam. By designing an appropriate structure it should be possible to produce this very

[Figures 1-3: cost comparisons of superconducting vs. conventional machines, in millions of 1981 U.S. dollars. Figure 4: the wake-field accelerating structure.]

high field in a certain location. A second beam can be accelerated there while the intense driving beam experiences only a much smaller wake field deceleration. The structure acts like a transformer in which a high current beam has a small voltage loss and accelerates a low current beam to high energies. Of the many conceivable structures the one shown in Fig. 4 can be exactly calculated with the help of T. Weiland's B.C.I. computer code. The driving beam has a ring-shaped geometry containing 6×10¹² electrons.

Its wake field losses are 31 MV/m. The high field is created in the center of the structure and amounts to 290 MV/m. The "transformer ratio" is thereby roughly 10 : 1. There are problems left to be analyzed:

1. The transverse stability of the electron ring;

2. the best conventional acceleration method for the driving beam; and

3. the optimum structure design.
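The transformer picture above can be cross-checked numerically (the 290 MV/m and 31 MV/m figures are from the text; the 1 GeV driver energy is an assumed illustration, not from the text):

```python
# Wake-field "transformer": an intense driving beam loses a small field,
# a trailing low-current beam sees a much larger accelerating field.
drive_loss = 31.0      # MV/m, wake-field deceleration of the driving ring beam
accel_field = 290.0    # MV/m, field created at the center of the structure

ratio = accel_field / drive_loss
print(f"transformer ratio ~ {ratio:.1f} : 1")   # ~9.4 : 1, i.e. roughly 10 : 1

# Energy bookkeeping: a driver injected at E_drive can boost the driven
# beam by at most ratio * E_drive before it is fully decelerated.
E_drive_GeV = 1.0      # assumed illustrative driver energy
print(f"max gain per stage ~ {ratio * E_drive_GeV:.1f} GeV")
```

This makes clear why the transverse stability and the re-acceleration of the driving beam (points 1 and 2 above) set the practical limit on the scheme.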

There are some notable aspects to this method:

1. Since the acceleration field exists only for extremely short times (10⁻¹¹ sec) in a very limited region, breakdown problems may not be serious.

2. The phase of the acceleration field is mostly determined by the driving beam, and the structure dimensions are much less critical than those in linear accelerators.

3. Electron rings with these dimensions and intensities have already been produced in collective electron ring accelerators and may be technically quite feasible.

I believe it requires fundamentally new accelerating techniques to make significant progress in our striving for higher energies.

Wake field acceleration may possibly be a practical method.

REMARKS TO PANEL DISCUSSION ON NEW MACHINES*

Robert B. Palmer

Brookhaven National Laboratory Upton, New York 11973

I. Energy vs. Luminosity Considerations

I will first report briefly on some work I did as part of a study group at the Division of Particles and Fields Workshop at Aspen in June of this year. The hadron collider study group was led by myself and John Peoples and set out to answer the question of what energies and luminosities would be required to reach a 1 TeV mass scale. It was decided to select a number of "bellwether" experiments for which one could ask what energy and luminosity would be required to observe 100 events at or above the 1 TeV scale in 10⁷ seconds of running. The


"scale" was defined as either the mass of some hypothetical particle that should be observed or the square root of the Q² in some high momentum transfer process. In each case it was required that a plausible case be made that backgrounds could be suppressed and that the solid angles used were reasonable.
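The 100-events-in-10⁷-seconds criterion translates directly into a smallest observable cross section at each luminosity; a short sketch:

```python
# Smallest cross section observable with 100 events in a 10^7 s run.
def sigma_min_cm2(luminosity, events=100.0, seconds=1.0e7):
    """luminosity in cm^-2 s^-1; returns the cross section in cm^2."""
    return events / (luminosity * seconds)

for lum in (1e30, 1e32, 1e34):
    sigma = sigma_min_cm2(lum)
    # 1 pb = 1e-36 cm^2
    print(f"L = {lum:.0e}: sigma_min = {sigma:.0e} cm^2 ({sigma / 1e-36:.0e} pb)")
```

So going from 10³⁰ to 10³⁴ moves the reach from ~10 pb processes down to ~1 fb processes, which is what drives the scale limits in Table I below.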

The "bellwether" experiments involving a momentum transfer were:

1) Observation of high transverse momentum jets
2) Observation of high transverse momentum π⁰'s
3) Observation of high transverse momentum γ's.

In each case the object is to observe deviations from the expected fall off with transverse momentum. Such deviations would occur either because of substructure in the quarks and gluons or because of the existence of massive new particles that decay into the jets, π⁰'s or γ's. The jets, having the highest production cross section, would explore the highest scale; the pions are the most direct, although perhaps the easiest experimentally; and the direct single photons are certainly hard to identify at high momenta.

The experiments exploring masses were:

4) μ⁺μ⁻ Drell-Yan continuum
5) Super heavy Z⁰ (decaying to μ⁺μ⁻)

6) The technieta η_T (decaying to two heavy jets)
7) The supersymmetric gluino (decaying to photino + 2 jets).

In the first cases one is looking for deviations from the QCD prediction. In the other cases one is searching for new classes of particles which may or may not exist. The three examples cover production mechanisms that vary from super strong (η_T, enhanced by technicolor) through electro-weak (Z⁰'s) to the relatively weak production of supersymmetric particles.

For each experiment a reasonable theoretical calculation was used (see proceedings of the Aspen Workshop) and the scale limit determined for a matrix of luminosities and energies. Table I shows the numerical limits obtained and Figs. 1-3 show hand drawn fits to these limits as a function of machine center of mass energy for 3 different luminosities: 10³⁰, 10³² and 10³⁴.

We note:

a) At 10³⁰ luminosity five of the seven experiments fail to attain the 1 TeV scale even at √s = 40 TeV. At √s = 10 TeV the mean fraction of √s that is observed (x_T) is only of the order of 1%. The contribution from valence quarks is then almost negligible, direct single photons are swamped by π⁰'s and some QCD calculations are invalid.

b) At 10³² luminosity only two of the experiments fail to reach the 1 TeV scale by √s = 40 TeV, and the mean x_T at √s = 10 TeV is now about 10%. Single photons can be observed and valence quark contributions are significant.

c) At 10³⁴ luminosity all the experiments reach the 1 TeV scale by


TABLE I

Mass or Q limit (TeV) for 100 events in 10⁷ sec:

Experiment   Luminosity    √s=0.8 TeV   √s=2 TeV   √s=10 TeV   √s=40 TeV
             (cm⁻²s⁻¹)
#1 Jets        10³⁰           .36          .70        1.7          2.4
               10³²           .50          1.1        3.6          8.0
               10³⁴           .60          1.4        5.4         14.0
#2 π⁰          10³⁰           .16          .24        .34          .38
               10³²           .30          .54        1.1          1.6
               10³⁴           .42          .88        2.6          5.0
#3 γ           10³⁰           .07          .07        .06          .04*
               10³²           .12          .17        .18          .14*
               10³⁴           .18          .31        .46          .41
#4 μ⁺μ⁻        10³⁰           .10          .10        .10          .10
               10³²           .11          .18        .30          .40
               10³⁴           .20          .40        1.1          1.7
#5 Z           10³⁰           .10          .14        .30          .33
               10³²           .22          .45        1.2          1.6
               10³⁴           .32          .80        2.8          4.2
#6 η_T         10³⁰           .13          .30        .70          1.5
               10³²           .30          .60        2.0          5.0
               10³⁴           .40          .90        3.7         12.0
#7 Gluino      10³⁰           .06          .07        .20          .50
               10³²           .11          .22        .60          1.2
               10³⁴           .15          .35        1.3          2.1

*In these cases the γ/π⁰ ratio is less than 1% and separation may not be possible.

[Figures 1-3: mass or Q limit (TeV) vs. machine c. of m. energy √s (TeV), for luminosities 10³⁰, 10³² and 10³⁴.]


[Figure 4: contours of equal mass scale (π⁰ production maximum Q) on a luminosity vs. machine c. of m. energy √s (TeV) plot.]

[Figure 5: two-jet mass limits for a 5+5 TeV pp machine at L = 10³⁴ compared with a 20+20 TeV p̄p machine at L = 10³⁰.]


√s = 40 TeV; indeed they do so essentially by √s = 10 TeV. The mean x_T at √s = 10 TeV is about 20%, and jets can be seen up to an x_T of 60%; i.e., we are approaching the kinematic limit and higher luminosity would gain us little.

Our conclusions are thus that 10³⁰ is too low for a future high energy collider, 10³² is the minimum that should be aimed at and 10³⁴ would be desirable.

Another way of discussing the problem is to ask what the trade-off of energy and luminosity is in this domain. Figure 4 shows approximate contours of equal mass scale on an energy vs. luminosity plot for an "average" experiment: high p_T π⁰ production. From this we can read off that a 5 TeV, 3×10³³ machine would reach the same Q as a 15 TeV, 3×10³² machine. Thus for this case a factor of 10 in luminosity is worth a factor of 3 in energy!

Clearly, it is more difficult to use the higher luminosities and more difficult to build a higher luminosity machine, but this must be compared with the enormous cost differential in a factor of 3 in energy.

Finally we can compare the mass limits attainable by a √s = 10 TeV (5 on 5) pp machine at 10³⁴ with a √s = 40 TeV (20 on 20) p̄p machine at 10³⁰. Figure 5 shows such a comparison. For every experiment the lower energy, high luminosity machine reaches far higher mass scales than the higher energy, low luminosity collider.

Of course there will be many "hard" experiments not so obviously related to these energy-luminosity considerations. For some of them low rate at high energy may offer experimental advantages; for others a need for high statistics will require more luminosity. The observation of some new "much stronger than QCD" interaction would not require luminosity and would favor energy alone, but there is little reason to expect such a thing.

II. High Luminosity Machine Design

The luminosity L of a collider, written in terms of the interaction length ℓ_i, is independent of whether one has head-on collisions of bunches of length ℓ_i, or finite angle crossing with crossing length ℓ_i. Assuming that the beams are cylindrically symmetric at the crossing point, the luminosity scales as

    L ∝ f² N² γ (ℓ_i/β) d / ε ,

where f is the frequency of rotation of particles in the rings, N the number of such particles [to include the p̄p case N² should be replaced by N(p)N(p̄)], γ is the energy, ε is the invariant emittance, β the focus parameter at the intersection and d the duty cycle (bunch length/spacing). For high luminosity one prefers DC operation for experimental reasons, so d = 1. ℓ_i/β is bounded from tune shift considerations to some small value: I take 1/3. ε is typically 10⁻⁵ π meter-radians. If we wish to keep the number of protons as low as possible then f should be high, thus preferring a high magnetic field; I take 10 Tesla, giving f = 22 kc for 5 TeV.

With these parameters one obtains 10³⁴ luminosity with 1.2×10¹⁵ protons stored (this is the same number of protons as considered in the ICFA p̄p study and corresponds to 2 amps circulating; the ISR stores ~ 50 amps). This is not unreasonable.
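The quoted revolution frequency follows from the bending field; a sketch (the 75% dipole packing fraction is an assumed figure, chosen only to illustrate why the frequency comes out near the 22 kc quoted for a 5 TeV beam in 10 T dipoles):

```python
import math

# Ring parameters for a 5 TeV proton beam in 10 T dipoles (sketch).
p_GeV = 5000.0          # beam momentum, GeV/c
B = 10.0                # dipole field, tesla
packing = 0.75          # assumed fraction of the ring filled with dipoles

rho = p_GeV / (0.2998 * B)            # bending radius in meters (p[GeV] = 0.3 B rho)
circumference = 2 * math.pi * rho / packing
f_rev = 2.998e8 / circumference       # revolution frequency, ultrarelativistic

print(f"bending radius ~ {rho / 1e3:.2f} km")
print(f"circumference ~ {circumference / 1e3:.1f} km")
print(f"revolution frequency ~ {f_rev / 1e3:.0f} kHz")  # close to the quoted 22 kc
```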


It is interesting to note that if in the same 2-ring machine one put 10¹³ p̄'s in one ring and the same 1.2×10¹⁵ protons in the other, one would obtain a continuous p̄p luminosity of 10³².

III. High Luminosity Experiments

It goes without saying that continuous beams are needed to keep the instantaneous luminosities down. Timing and vertex tracking should then be able to separate events at luminosities up to 10³⁴.

Proportional wire chambers, short distance drift chambers, Cerenkov counters, and scintillators will all operate even close to the vertex at 10³³. Calorimeters now constructed would work at 10³² and can arguably be used at 10³³. Development of faster read out would enable operation between 10³³ and 10³⁴. Muon experiments using a hadron shield can clearly use 10³⁴, as can small solid angle experiments such as direct photon observation.

Vertex detectors are very new and their present read out schemes would limit luminosities to below 10³², but the devices are intrinsically fast and thus could probably take high luminosity with sufficient development.

If we consider the "bellwether" experiments again we find that experiments 3, 4 and 5 could use 10³⁴ luminosity at once. Experiments 1, 2 and 7 could use 10³³ now, and higher rates with development of a faster calorimeter. Experiment 6 would be limited to below 10³² until fast vertex detectors were available.

There seems no fundamental experimental reason why the rates from 10³⁴ luminosities cannot be used. One must remember that a typical fixed target machine has a luminosity of 10³⁷!
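The raw rates behind these statements can be sketched from the total inelastic cross section (the ≈100 mb value is an assumed round number for multi-TeV pp collisions, not from the text):

```python
# Raw interaction rate R = sigma_inel * L at each candidate luminosity.
sigma_inel = 1.0e-25   # cm^2, roughly 100 mb (assumed round number)

for lum in (1e30, 1e32, 1e34):
    rate = sigma_inel * lum
    print(f"L = {lum:.0e} cm^-2 s^-1 -> {rate:.0e} interactions per second")
```

At 10³⁴ this is of order 10⁹ interactions per second, which is why fast calorimetry and fast vertex read out are the limiting detector technologies above.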

A "NOVEL" APPROACH TO HIGH ENERGY

Leon M. Lederman

Fermi National Accelerator Laboratory Batavia, Illinois 60510

The time is short, but I do have something different to say about future accelerators -- the result of recent discussions at Fermilab.

We were reminded by Professor Salam that theoretical physics is at an impasse. The mass range near 100 GeV seems well covered by operating (CERN p̄p) or designed machines. But there is a general perception by what Enrico Fermi called the "high priests of theoretical physics," and some low priests, too, that the mass range up to and above 1 TeV will hold the answers. This implies e⁺e⁻ machines of 500 on 500 GeV -- and both Voss and Richter agree these are in the distant future -- or hadron colliders of at least 5×5 TeV or better 10×10 TeV. Past ICFA workshops have placed the cost of 20 TeV at over 2 billion dollars, requiring international collaboration of such intimacy as to be remote. We asked the question: Is there any way out of the impasse? Can we possibly invent a program of getting to 20 TeV before 2001? This naturally led us to a reconsideration of old technology.


Our cost reference is the ring made with new technology 4-5 T magnets of the kind we have developed for the TEVATRON. The cost of magnets, tunnel, refrigeration, controls, beam transfer, aborts, etc., comes to $23K/meter at 1982 prices. Assuming modest savings and extending this to 10 TeV, we arrive at a cost of about $1.2B. Since no site in the U.S. is large enough, the total cost must include a new laboratory and we get to the $2B figure. Higher field magnets also look very complicated. We have at least three or four years of work before we can understand them from the "accelerator" point of view.

There is a good chance that the costs may increase faster than the field. Lasers and other exotics again require a very long R & D period. So what about old-fashioned iron magnets?

We asked the question: Can one put all of the R & D, imagination, invention into mass production and cost-saving installation techniques to bring down the cost of a well-understood technology? We need superconducting wire to save energy; that is, we are dealing with superferric magnets, and considerable work was done on these in the '60s by G. Danby at BNL and, more recently, at Dubna. Since the radius is now very large (15-20 km), we need a large, flat and unpopulated site: hence "machine-in-the-desert."

What is crucial is the idea that a good quality iron magnet energized with simple superconducting bars can be made by imaginative mass production techniques in hundred-foot lengths, assembled in a pipe which is buried underground. Using costs of materials, site development, rf, refrigeration, and pipeline data, we have a preliminary estimate of $3K/meter for the main ring. We stress that we want reliability and exposure to physics at the earliest times. We must clearly invent all kinds of gadgets to solve the problems of such a geographic machine. There are interesting problems as to the requirements on field quality, lattice design, etc. Any or all of many as yet inadequately considered problems can, in principle, sink our cost estimates. But let us suppose the number we "wished" holds up.

Then, using an optimistic 2.5 T and desiring √s = 20 TeV gives us a ring of R = 16 km at a cost of $300M. To this we must add an injector.

Injectors should not cost more than 20% of the cost of the main ring.

Here I may be way off, but let me take $100M. I add $50M for site preparation, roads, an initial, primitive complement of buildings and a workman's cafeteria. I add $50M for 6 interaction regions, $50M for a p̄ source, and $200M for 6 years worth of incremental salaries and contingency. So I say: "For $750M, we can begin serious observations of collisions at √s = 20 TeV!" And we have begun to develop a new high energy laboratory. The assumed flat U.S. budget can probably support this and see physics by 1992-3. With international help this could be advanced two or three years.
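The arithmetic behind these figures tallies quickly (all numbers taken from the text):

```python
import math

# "Machine in the desert" cost arithmetic, using the figures in the text.
R = 16.0e3                 # ring radius in meters (2.5 T, sqrt(s) = 20 TeV)
cost_per_meter = 3.0e3     # preliminary superferric estimate, $/m

main_ring = cost_per_meter * 2 * math.pi * R
print(f"main ring ~ ${main_ring / 1e6:.0f}M")     # ~ $300M, as quoted

budget_M = {
    "main ring": 300, "injector": 100, "site preparation": 50,
    "interaction regions": 50, "pbar source": 50,
    "salaries + contingency": 200,
}
print(f"total ~ ${sum(budget_M.values())}M")      # $750M
```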

There are many open questions. We are far from sure about our cost estimates. We do not have a convincing lattice yet, let alone proof that it can store. We are not sure about the trade-off of energy and luminosity. Relevant to this, the graphs shown by Dr. Palmer apply to the class of mythical experiments that have detectors clicking away at hundreds of megacycles collecting totally background-free data.

The next year or two should tell us a lot about the feasibility of this approach. At least, this may be the shortest R & D timescale for the multi-TeV machine.


PANEL DISCUSSION ON NEW MACHINES

J. B. Adams

CERN

My contribution to this Panel Discussion on the next accelerators is different from those of the previous speakers since I will not be describing new ideas for future accelerators and colliders but reporting to you what ICFA has been trying to do concerning the long-term future of these machines. ICFA is the International Committee for Future Accelerators and was set up by the IUPAP Commission for Particles and Fields in 1977 to prepare the way for the day when accelerators and colliders become so big and expensive that it may only be possible to build them by the widest international collaboration. Nobody knows when that day will come but, clearly, it will not be before the present round of machine building is completed and these machines are in operation. We are talking therefore about the machines of the 1990's which may come into operation at the turn of the century.

In order to prepare the way for this future generation of machines, ICFA has launched a number of initiatives in collaboration with the major laboratories in the different regions of the world most active in high energy particle physics, namely, the USA, the USSR, Eastern and Western Europe and Japan.

The first of these initiatives was to organize a series of Workshops on the "Possibilities and Limitations of Future Accelerators". These Workshops examined the scientific and technical problems presented by future machines both in general and by considering two specific machines -- a 20 TeV proton machine and a 700 GeV centre of mass energy electron-positron collider. These Workshops were highly successful in the sense that they brought accelerator experts together from the different regions to discuss the problems and to propose solutions. One positive result was ideas concerning linear e⁺e⁻ colliders for energies greater than those of LEP, which are relevant to the Single Pass Collider of SLAC and the Novosibirsk linear e⁺e⁻ collider. Unfortunately, the Workshops did not have the effect of increasing very much the amount of effort going into developing new ideas and new technologies for future machines and I shall return to this problem in a few minutes.

The second initiative of ICFA was to work out with the major laboratories a set of guidelines which they could use in making their new machines available to experimental groups from other regions of the world. These guidelines are proving very valuable in internationalizing the use of the new machines which, as you know, are much less numerous than their predecessors, offer different experimental possibilities and are located in different regions of the world.

The third initiative of ICFA is still uncompleted and is proving to be the most difficult which it has so far attempted. The ICFA Workshops clearly showed that if the design and construction of future


machines are based on present accelerator ideas and technologies they will be monstrous in size and probably prohibitively expensive, even for multinational organizations. This was demonstrated by the two machines studied at the Workshops. The 20 TeV proton machine, if it used 5 Tesla superconducting magnets, would be 50 km in circumference, i.e. twice the size of LEP, and the 700 GeV e⁺e⁻ collider, if the accelerating rate was 20 MeV/m, which is the present day value, would be about 30 km in overall length. Clearly new ideas are needed to increase the maximum attainable MeV/m and also to reduce the costs per MeV and the electrical power consumption of these machines. Some of these new ideas have been described by previous speakers today and, as you have heard, they aim at accelerating rates very much higher than present-day values. To illustrate this point, I can use a figure of merit which is simply the maximum beam energy of the machine divided by its circumference or length. For present day and currently projected machines, it works out as follows:

Proton machines:
  CERN and Fermilab 450 GeV machines        70 MeV/m
  Tevatron and UNK with 5 Tesla magnets    150 MeV/m
  A 10 Tesla magnet machine                300 MeV/m

Electron machines:
  PEP and PETRA                              9 MeV/m
  LEP (at top energy)                        4 MeV/m
  Single Pass Collider                      17 MeV/m
  Test cavities in short lengths     about 150 MeV/m
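The figure of merit is just energy over length; a sketch (the machine lengths are assumed round numbers for illustration, not taken from the text):

```python
# Figure of merit = maximum beam energy / machine circumference or length.
# Lengths below are assumed round numbers for illustration.
machines = {
    "450 GeV proton ring (6.9 km circumference)": (450.0e3, 6.9e3),
    "Tevatron-class 1000 GeV ring (6.3 km)":      (1000.0e3, 6.3e3),
    "Single Pass Collider (50 GeV, 3 km linac)":  (50.0e3, 3.0e3),
}
for name, (energy_MeV, length_m) in machines.items():
    print(f"{name}: {energy_MeV / length_m:.0f} MeV/m")
```

With these assumed lengths the figures come out near the 70, 150 and 17 MeV/m entries in the list above.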

Several of the new ideas promise accelerating rates of many GeV/m which is a very big increase indeed over presently achieved rates but these promises have not yet been realized experimentally, even in lengths of a few meters.

Furthermore, it is by no means clear that these new ideas, even if they achieve these very high accelerating rates, will necessarily reduce the costs per MeV of future machines or their electrical power consumption. Indeed, one laboratory, Fermilab, is wondering whether it may not be better from these points of view to use present-day accelerating rates and to concentrate on ways of reducing the costs per MeV by developing very cheap mass-produced machine elements. Of course, machines designed in this way will be very large but if there is an extensive desert available in the neighborhood, it may offer a cheaper solution.

Finally, it is worth pointing out that it seems to take a very long time to develop new ideas and technologies to the stage where they can be used with confidence in very high energy machines. For example, the first papers on superconducting high field magnets appeared at the International Accelerator Conference in 1961. The Tevatron when it comes into operation next year will be the first large machine to use these magnets. The elapsed time between first paper designs and first operating machine will therefore be 22 years.

A less happy example is the idea of collective field accelerators.

The first papers by Veksler, Budker and Fainberg appeared in 1956, but 26 years later, we still do not have a practical design for a high energy machine based on these ideas. Clearly if new ideas and technologies are to be available for the very high energy machines of the 1990's a determined and substantial research and development effort will have to be launched now in all the regions of the world.


It is the problem of how to launch an increased effort on new accelerator ideas and technologies which has been preoccupying ICFA during the last year and it has been discussing with the major laboratories whether one solution might not be to set up a coordinated interregional effort. The difficulty is to find an acceptable way to launch such an effort and to find the money to finance it.

Whatever emerges from these discussions, it is generally agreed that we need to interest more young physicists and more groups in advanced accelerator research and development and to raise the status of this work as an academic discipline. This is important not only because the more people involved the more likely it is that new ideas will be generated but also because most of the big laboratories are now heavily involved in designing and building their current new machines. In fact, the list of these new machines is very impressive -- on the last count I made it included three new proton machines, five new electron machines and two electron-proton colliders.

The message which I think ICFA would like to transmit to this Conference, to both experimental and theoretical physicists, is that if high energy particle physics using accelerators is to continue in the long-term, much more effort is needed on developing new ideas and new technologies for future accelerators. Those who depend on these machines, which includes most of the audience present today, should be the first to help develop these ideas, since their future depends on a successful outcome. And in addition, new groups should be encouraged to enter this fascinating field of science, particularly university groups, and more teaching should be devoted to this subject by the big laboratories and in the universities.

Discussion

J.H. MULVEY (Oxford). - There is a clear message from the panel that to reach the energies specified by Salam new methods of particle acceleration must be found. By the way, the nature of the challenge is perhaps better appreciated if instead of GeV/m we were to speak of TeV/km as a unit of acceleration gradient; some of the novel ideas being discussed may generate very large fields but over very small distances, and new problems arise if they are to work over the lengths needed to reach multi-TeV energy.

Salam has mentioned the meeting being organised by ECFA in association with the Rutherford Appleton Laboratory. This will be at Oxford at the end of September and will review a number of new approaches to particle acceleration, some of which were mentioned by Richter. We hope to identify some of the more promising lines and stimulate interest in further studies of these, especially in Europe where very little is now going on in this area.

I should like to join Adams in exhorting more physicists to take up this work. We should not rely on the big accelerator labs - they already have heavy commitments to the next machines - and much of the needed effort can be put in by other laboratories and universities; we should also seek help from groups working on the physics of plasmas and high power lasers.

Abdus announced that at the Oxford meeting we would "unfold the design for a multi-TeV accelerator"; this, I'm afraid, is a little too optimistic for us to promise! The magnitude of that task is perhaps illustrated by the Livingston law, mentioned by Burt, which is an extraordinarily good description of the evolution of centre of mass energy with time. Taking the slope achieved for e⁺e⁻ colliders over the past decade, if one extrapolates then Salam's 200 TeV would be reached in the year 2030; we need a break-through quickly!
