Monday, 5 February 2024

Rencontres de Physique des Particules 2024

Just over a week ago the annual meeting of theoretical particle physicists (RPP 2024) was held at Jussieu, the campus of Sorbonne University where I work. I wrote about the 2020 edition (held just outside Paris) here; in keeping with tradition, this year's version also contained similar political sessions with the heads of the CNRS' relevant physics institutes and members of CNRS committees, although they were perhaps less spicy (despite rumours of big changes in the air). 

One of the roles of these meetings is as a shop window for young researchers looking to be hired in France, and a great way for them to show interest in, and a connection to, the system. Of course, this isn't and shouldn't be obligatory by any means; I wasn't really aware of it before entering the CNRS myself, though I had many connections to the country. But that sort of thing seems especially important after the problems described by 4gravitons recently, and his post about getting a permanent job in France -- being able to settle in a country is non-trivial: it's a big worry for future employers, and often not one given enough weight by candidates fighting tooth and nail for the few jobs there are. There was another recent case of someone getting a (CNRS) job -- to come to my lab, even -- who decided much more quickly to leave the entire field for personal reasons. Both these stories saddened me. I can understand them -- there is the well-known Paris syndrome, for one thing -- and the current political anxiety about immigration and the government's response to the rise of the far right (across the world), coupled with Brexit, is clearly making things harder for many. These stories are especially worrying because we expect to be recruiting for university positions in my lab this year.

I was obviously very lucky and my experience was vastly different; I love both the job and the place, and I'm proud to be a naturalised citizen. Permanent jobs in the CNRS are amazing, especially in terms of the time and freedom you have, and there are all sorts of connections between the groups throughout the country such as via the IRN Terascale or GdR Intensity Frontier; or IRN Quantum Fields and Strings and French Strings meetings for more formal topics. I'd recommend anyone thinking about working here to check out these meetings and the communities built around them, as well as taking the opportunity to find out about life here. For those moving with family, France also offers a lot of support (healthcare, childcare, very generous holidays, etc) once you have got into the system.

The other thing to add that was emphasised in the political sessions at the RPP (reinforcing the message that we're hearing a lot) is that the CNRS is very keen to encourage people from under-represented groups to apply and be hired. One of the ways they see to help this is to put pressure on the committees to hire researchers (even) earlier after their PhD, in order to reduce the length of the leaky pipeline.

Back to physics

Coming back to the RPP, this year was particularly well attended and had an excellent program of reviews of hot topics, invited and contributed talks, put together very carefully by my colleagues. It was particularly poignant for me because two former students from my lab with whom I worked a lot -- one of whom recently got a permanent job -- were speaking; in addition, both a former student of mine and his current PhD student were giving talks, which made me feel old. (All these talks were fascinating, of course!)

One review that stood out as relevant for this blog was Bogdan Malaescu's review of progress in understanding the problem with muon g-2. As I discussed here, there is currently a lot of confusion about what the Standard Model prediction for that quantity should be. This is obviously very concerning for the experiments measuring muon g-2, who in a paper last year reduced their uncertainty by a factor of two to $$a_\mu (\mathrm{exp}) = 116\,592\,059(22)\times 10^{-11}. $$

The lattice calculation (which has now been confirmed by several groups) disagrees with the prediction from the data-driven R-ratio method, however, and there is a race on to understand why. New data from the CMD-3 experiment seem to agree with the lattice result, but combining all global data on measurements of \(e^+ e^- \rightarrow \pi^+ \pi^- \) still gives a discrepancy of more than \(5\sigma\). There is clearly a significant disagreement within the data samples used (indeed, CMD-3 significantly disagrees with its own predecessor, CMD-2). The confusion is summarised by this plot:

As can be seen, the finger of blame is often pointed at the KLOE data; excluding it but including the others in the plot gives agreement with the lattice result, and a significance of non-zero \(\Delta a_\mu\) compared to experiment of \(2.8\sigma\) (or, for the dispersive method alone without the lattice data, \( \Delta a_\mu \equiv a_\mu^{\rm SM} - a_\mu^{\rm exp} = (-123 \pm 33 \pm 29 \pm 22) \times 10^{-11} \), a discrepancy of \(2.5\sigma\)). In Bogdan's talk (see also his recent paper) he discusses these tensions, as well as tensions between the data and the evaluation of \(a_\mu^{\rm win}\), the contribution coming from a narrow "window": when the total contribution to the Hadronic Vacuum Polarisation is split into short-, medium- and long-distance pieces, the medium-range part should be the most reliable for lattice calculations -- at short distances the lattice spacing may not be fine enough, and at long distances the lattice volume may not be large enough. There he shows that if we exclude the KLOE data and include just the BABAR, CMD-3 and Tau data, then while the overall result agrees with the BMW lattice result, the window quantity disagrees by \(2.9 \sigma\) [thanks Bogdan for the correction to the original post]. It's clear that there is still a lot to be understood in the discrepancies between the data sets, and perhaps, with the added experimental precision on muon g-2, there is even still a hint of new physics ...
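The quoted \(2.5\sigma\) can be reproduced with a quick back-of-the-envelope combination -- a sketch only, assuming the three quoted uncertainties are independent and so combine in quadrature:

```python
import math

# Dispersive (KLOE-excluded) result quoted above, in units of 1e-11:
# Delta a_mu = a_mu(SM) - a_mu(exp) = -123 +/- 33 +/- 29 +/- 22
delta = -123.0
errors = [33.0, 29.0, 22.0]

# Combine the three uncertainties in quadrature
total_error = math.sqrt(sum(e**2 for e in errors))

significance = abs(delta) / total_error
print(f"total uncertainty = {total_error:.1f}e-11")
print(f"significance = {significance:.1f} sigma")  # ~2.5 sigma, as quoted
```

The same quadrature logic underlies the other significances quoted above.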

Monday, 26 September 2022

Pointless particle theorists?

The latest (Twitter) storm in a teacup on this subject nearly prompted me to return to my soapbox, but now my favoured news source (not the Onion but the Grauniad) has an article attacking particle theorists. Yet again, the usual suspect with books to sell and a career to save is dunking on particle physics: supposedly a finished branch of science that should give up and go home now that we've found the Higgs boson, because we can't prove there's a new particle just round the corner, and isn't the Standard Model great anyway? The problem is, of course, that the story of maverick (Top Gun Maverick!) outsiders revealing the "truth" that elites in an ivory tower are spending oodles of public money on failed ideas, and that we need a whistleblower to expose the fraud, gains such traction with the taxpaying public.

I even see other physicists defending this public anti-vax-style conspiracy-theory propagation as good for the field: it rattles people's cages and gets people to question their assumptions. The sociology of this is quite interesting, because there are people within the field who either work on niche theories or just want to take down adjacent fields, and who would like to see the popular paradigms brought down a peg or two, presumably naively believing that this will lead to more resources (human, cash or just attention) being sent in their direction. But of course public disparaging of scientists can only ever lead to a general reduction of public trust and a shrinking of the pie for everyone. There exist many internal mechanisms for theorists to (re)consider what they are working on, e.g.:

  • Grant proposals. The good thing about writing them is that they give you an opportunity to think deeply about what you really want to work on. Boring or uninnovative things just don't get funded. Of course, the evaluation systems may be terrible and biased and reward things not being too far from the reviewer's/panel's interests ... there is much room for improvement; but at least the writing part can be useful.
  • Researcher evaluations. At least here in France we must make a declaration of our activities and plans every year, and write a longer report every few years. This serves a similar purpose to the above.
  • New hires/promotions. Groups want to hire people who are working on interesting stuff. Hiring someone permanently is an endorsement of a field.
  • Citations, talk invitations etc. While people citing your work may just be because they are working on something similar, and people like to invite their friends for talks, sufficiently interesting or new work will persuade people to follow it up and garner attention.
    These are all group mechanisms whereby scientists evaluate each other and what they themselves are doing. I am sure someone has studied the game theory of it; indeed, as individual researchers trying to succeed in our careers we all have to adopt strategies to "win", and it is a shockingly competitive system at every stage. Of course, promoting a new idea can be difficult -- we are in a constant battle for attention (maybe writing a blog is a good strategy?) -- but if there is something really promising, people will not ignore it. Ambulance chasing (where tens or hundreds of papers follow a new result) is a sign that plenty of people are ready to do exactly that. If a maverick outsider really had a great idea, there would be no shortage of people willing to follow. To take an example, if "the foundations of physics" really offered opportunities for rapid important progress, people would vote with their feet. I see examples of this all the time with people trying out Quantum Computing, Machine Learning, etc.

    I'll let you in on a secret, therefore: the target of the bile is a straw man. I don't know anyone hired as a BSM model builder in recent years. People became famous for it in the 90s/early 00s because there was no big experiment running and the field was dreaming big. Now we have the LHC, and that has focussed imaginations much more. People hired now as phenomenologists may also do some ambulance chasing on the side, but it is not their bread and butter. Inventing models is usually a difficult and imaginative task, aimed at connecting often disparate ideas, but it's not the only task of a phenomenologist: the much bigger ones are understanding existing models, and trying to connect theory to experiments!


    In defence of ambulance chasing (retch)

    When an experiment announces something unexpected (as happens quite frequently!) what is the correct response? According to our outsider, presumably we should just wait for it to go away and for the Standard Model to be reaffirmed. People in the field instead take the view that we should be curious and try to explain it; the best ideas come with new features or explain more than one anomaly. What should we do with wrong explanations? Should we be punished for not coming up with proven theories? Do we need external policing of our curiosity? What does ambulance chasing really cost? The attraction for many departments of forming a theory group is that theorists are cheap -- they don't need expensive experiments or engineers/technicians/people to wash the test tubes. The reward for coming up with a failed theory is usually nothing; but it costs almost nothing too. So why the bitterness? Of course, we can begrudge people becoming famous for coming up with fanciful science fictions -- the mechanisms for identifying promising ideas are far from perfect -- but usually they have come up with something with at least some degree of novelty.

    When looking at CVs, it's very easy to spot and discount 'ambulance citations.' By the way, another phenomenon is signing 'community papers', where tens or hundreds of authors group-source a white paper on a popular topic; a third is writing a review of a hot subject. Both of these work very well to generate citations -- should we stop doing them too? In the end, the papers that count are the ones with an interesting result or idea, and there is no sure mechanism for writing them. In the aftermath of every ambulance-chasing cycle there are almost always papers with some interesting nugget of an idea in them -- something that remains that would not have been suggested otherwise, or computations done that would otherwise not have been thought of -- and that hopefully brings us closer to discoveries.

    Recent progress

    We have an amazing collider experiment -- the LHC -- which will run for another ten years or so at high luminosities. We can either take the view in advance that it will tell us nothing about the energy frontier, or we can try to make the most of it. The fundamental problems with our understanding of physics have not been solved; I wrote a response to a similar article in 2020 and I stand by my opinion of the state of the field, and you can look there for my laundry list of problems that we are trying to make sense of. What has changed since then? Here are just a few things, biased by my own interests:

    Muon g-2

    The measurement of the muon g-2 by Fermilab confirmed the earlier anomalous measurement. But now we have the problem that a series of lattice QCD groups have a calculation implying that the Standard Model prediction is closer to the measurement, in contradiction with the R-ratio method. Someone has underestimated their uncertainties, but we don't know who! This is a problem for theorists working with the experiments; perhaps the new MUonE experiment will help resolve it?

    CDF measurement of the W mass

    As reported everywhere, the CDF experiment at the Tevatron (the previous energy-frontier collider, which shut down ten years ago) analysed its data and found a measurement of the mass of the W boson in enormous disagreement with the Standard Model: 7 standard deviations. If confirmed, it would signal new physics around the TeV scale. Since the W boson mass is just about the most generic thing that can be modified by new particles near the electroweak scale, there are any number of new theories that can explain it (as the arXiv this year will attest). There is also a 4 standard deviation tension with a measurement at the LHC, which has a much larger uncertainty. Another LHC measurement is now needed to settle the issue, but this may take a long time as it is a difficult measurement to make at the LHC. (Maybe we should just not bother?) Other than lots of fanciful (and dull) model building, this has recentred theory efforts on how to extract information from the W boson mass in new theories, which is a problem of precision calculations and hugely interesting ...

    The XENON1T anomaly disappeared

    The XENON collaboration released new results this summer showing that the anomaly they had found at low recoils disappeared with more data. While this immediately killed many theories built to explain it, the lasting effect is that people have given serious thought to low-mass dark matter models that could have explained it, and come up with new ways to search for them. Without looking, we don't know if they are there!

    An anomaly in extragalactic background light was found

    A four standard deviation anomaly was reported in the extra-galactic background light (EBL), i.e. there is too much light coming from outside galaxies in every direction! This would naturally be explained by an axion-like particle decaying -- indeed, measurements of the EBL have long been used as constraints. (Maybe we should never have come up with axions?)

    The LHC reported three other anomalies

    In analysing the data of Run 2, three different searches reported anomalies of three standard deviations. Explanations for them have been suggested; perhaps we should look for correlations with other searches, or for new ways of corroborating the possible signals? Or just not bother?

    Run 3 has started

    Run 3 of the LHC has started with a slightly higher energy, and then stopped due to technical problems. It will be some time before significant luminosity is collected, and our experimentalists are looking at new types of searches that might lead to discoveries. Their main motivation is that new signatures mean new reach. Our colleagues certainly need justifications or interpretations for their results, and while it is of course a concern whether the models really offer explanations of other types of new physics (e.g. dark matter), it is not the main one. The reason to do an experiment at the LHC is curiosity-based -- experimentalists are not children looking for theorists' latest whims. The point is that we should test the theories every which way we can, because we don't know what we will find. A good analogy might be zoologists looking for new species: they might want to go to a previously unexplored region of the earth, or they might find a new way of looking at ones they've been to before, e.g. by turning over rocks that they would otherwise have stepped over.

    Long Lived Particles

    One of these classes is long-lived particles (LLPs) -- which I have written about before on here -- and they have also caught the imagination of theorists. In fact, I'm working with experimentalists with the aim of making their searches more widely applicable.

    SMEFT

    Two years ago I wrote that I thought the field was less inclined to follow hot topics, and that this is healthy. This is still the case. However, some hot topics do exist, and one of these is the Standard Model Effective Field Theory. There is now rapid development of all manner of aspects, from phenomenology to the exploration of higher-order versions to matching, etc.

    Machine Learning

    Another example is machine learning, which is becoming more prevalent and interesting, especially its interface between theory and experiments.

    Of course, there are many more developments and I'm sure many I'm not aware of. Obviously this is a sign of a field in big trouble!

    Sunday, 10 January 2021

    Recasting a spell

    For three successive Januaries now, since I started this blog in 2018, I have posted a list of things to look forward to, which for whatever reason didn't materialise and so were essentially repeated the next year. Given the state of the world right now, some positive thinking seems to be needed more than ever, but it would be a bit of a joke to repeat the same mistake again. In particular, the measurement of the muon anomalous magnetic moment (which is apparently all I blog about) has still not been announced, and I'm led to wonder whether last year's controversies regarding the lattice QCD calculations have played a role in this, muddying the water.

    Instead, today I want to write a little about an effort that I have joined in the last couple of years, and really started to take seriously last year: recasting LHC searches. The LHC has gathered a huge amount of data, and both main experiments (CMS and ATLAS) have published O(1000) papers. Many of these study Standard Model (SM) processes, but there are a large number of dedicated searches for New Physics models. Some of them contain deviations from the predictions of the Standard Model, although at present there is no clear and sufficiently significant deviation -- with the obvious exception of LHCb and the B-meson anomalies. Instead, we can use the data to constrain potential new theories.

    The problem is that we can't expect the experiments to cover even a significant fraction of the cases of interest to us. For an important example, the simplest supersymmetric models have four 'neutralinos' (neutral fermions), two 'charginos' (charged fermions), plus scalar squarks and sleptons -- and two heavy neutral Higgs particles plus a charged Higgs; it is clearly impossible to list, in a few papers, limits on every possible combination of masses and couplings for these. So the experiments do their best: they take either the best-motivated or the easiest-to-search-for cases and try to give results that are as general as possible. But even then, supersymmetric models are just one example, and it is possible that a given search channel (e.g. looking for pair production of heavy particles that then decay to jets plus some invisible particles as missing energy) could apply to many models; it is impossible in a given paper to provide all possible interpretations.

    This is where recasting comes in. The idea is to write a code that can simulate the response of the relevant LHC detector and the cuts used by the analysis and described in the paper. Then any theorist can simulate signal events for their model (using now well-established tools) and analyse them with this code, hopefully providing a reasonably accurate approximation of what the experiment would have seen. They can then determine whether the model (or really the particular choice of masses and coupling for that model) is allowed or ruled out by the particular search, without having to ask the experiments to do a dedicated analysis.
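To make the idea concrete, here is a deliberately toy sketch of that logic in Python. Every number, cut and efficiency below is invented for illustration; real recast codes use full event simulation and proper statistical limit-setting rather than this crude counting criterion:

```python
import math
import random

random.seed(1)

# Hypothetical signal events: (missing_ET, leading_jet_pT) in GeV,
# as they might come out of a Monte Carlo generator.
signal_events = [(random.gauss(250, 80), random.gauss(400, 120))
                 for _ in range(10000)]

def passes_selection(met, jet_pt):
    """Invented stand-in for the cuts described in an analysis paper."""
    return met > 200 and jet_pt > 300

# Approximate the detector response by a flat efficiency (a crude
# assumption; real recast tools use per-object efficiency maps).
detector_eff = 0.7

acceptance = sum(passes_selection(met, pt)
                 for met, pt in signal_events) / len(signal_events)

# Expected signal yield for an assumed cross-section and luminosity
xsec_fb, lumi_fb = 1.5, 139.0          # invented benchmark values
expected_signal = xsec_fb * lumi_fb * acceptance * detector_eff

# Compare to (invented) observed and background counts: a simple
# criterion is to exclude the point if the expected signal exceeds the
# observed excess by ~2 standard deviations.
observed, background = 52, 48.0
excluded = expected_signal > (observed - background) + 2 * math.sqrt(observed)
print(f"acceptance = {acceptance:.2f}, expected signal = {expected_signal:.1f}")
print("excluded" if excluded else "allowed")
```

Real frameworks replace the flat efficiency with per-object efficiency maps and the exclusion criterion with a proper CLs computation, but the skeleton -- generate, select, scale, compare -- is the same.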

    Ideally, recasting would be possible for every analysis -- not just searches for particular models, but also Standard Model searches (one example I have been involved in is recasting the search for four top-quarks, which was designed to observe the Standard Model process and measure its cross-section, but which we could then use to constrain new heavy particles that are produced in pairs and decay to pairs of tops). However, this is a lot of work. Theorists do not have access to the simulation software used for the experiments' own projections (and it would probably be too computationally intensive to be really useful anyway), so the experiments cannot just hand over their simulation code. Instead there is a lot of work to make approximations, which is why there is really some physics involved and it is not just a mechanical exercise. Sometimes the experiments provide pseudocode including an implementation of the cuts made in the analysis, which helps in understanding the paper (where there is sometimes some ambiguity), and often they provide supplementary material, but in general getting a good recast is a lot of work.

    In recent years there has been a lot of effort by both experimentalists and theorists to make recasting easier and to meet in the middle. There are lots of workshops and common meetings, and lots of initiatives such as making raw data open access. On the theory side, while many people write their own bespoke codes to study some model with some search, there are now several frameworks for grouping together recasts of many different analyses. The ones with which I am most familiar are CheckMATE, Rivet, ColliderBit and MadAnalysis. These can all be used to check a given model against a (non-identical) list of analyses, but work in somewhat different ways -- at their core, though, they all involve simulating signals and detectors. There is therefore a little friendly competition and very useful cross-checking. Then there are other useful tools, in particular SModelS, which instead cleverly compares whatever model you give it to a list of simplified-model results published by the LHC experiments, and uses those to come up with a much faster limit on the model (this loses generality and can fall prey to the assumptions and whims of the particular simplified-model results that are available, but is nonetheless very useful).

    So the reason for the post today is the paper I was just involved in. It is the proceedings of a workshop where a group of people got together to recast a bunch of the latest LHC searches in the MadAnalysis framework. I didn't take part in the workshop, but did recast a search which was useful for a paper last year (if you are interested, it involves the signal in the inset picture), so there are now 12 new reinterpretations of some of the latest LHC searches. This should be useful for anyone comparing their models to data. You can see by the number of authors involved how labour-intensive this is -- or check out the author list on last year's white paper. There are still many more searches from Run 2 of the LHC yet to be recast, so we have our work cut out for some time to come before there is any new data!

    If you are interested in the latest developments, there will be a forum next month.

    Sunday, 4 October 2020

    Zoom on the Universe

    Like everyone else in the field I've been enjoying the new benefits and disadvantages of Zoom meetings since the lockdown in March. They've given access to talks from all around the world, and made it even easier to carry on working on my own stuff while ignoring someone else talking in the background (it's amazing how motivating being in a talk can be).

    However, as part of the Fête de la Science, this week some eminent members of the particles and strings groups at my lab are getting together to do an Ask Me Anything for anyone -- adults or children -- interested in asking about the deep questions of the universe. The sessions will be on Monday/Wednesday/Friday at 17h30 CET. The first two will be in French, and the last one in English. I'll be taking part in the one on Friday, mainly putting on my string theory cap. We had a rehearsal last week which was a lot of fun; I'm hoping the real thing will be even better.

    Please circulate the link here and visit us to ask anything you ever wanted to know about string theory, cosmology, dark matter, physics Beyond the Standard Model, ... from researchers working at the cutting edge!

    Tuesday, 10 March 2020

    Muon g-2: lattice salad

    A couple of weeks ago, a new lattice QCD calculation by a group known as BMW tried to tip the BSM community into depression by reporting that they could resolve the tension between theory and experiment. Their new result had a tiny uncertainty (of 0.6%), much smaller than any previous lattice computation.

    As I've mentioned here several times, the anomalous magnetic moment of the muon is one of the most precisely measured quantities in the world, and its prediction from theory has for several years been believed to be slightly different from the measured value. Since the theory was thought to be well understood and rather "clean", with an uncertainty similar to the experimental one (yet with the two values substantially different), it has long been hoped that the Standard Model's cracks would be revealed there. Two new experiments should tell us more, including one at Fermilab that should report data this year with a potentially four times smaller experimental uncertainty than the previous result; an elementary description of the physics and the experiment is given on their website.

    However, there were always two slightly murky parts of the theory calculation, where low-energy QCD rears its head in loops. A nice summary can be found in e.g. this talk from slide 33 onwards, and I will shamelessly steal some figures from there. These QCD loops appear as:

    hadronic light-by-light, and
    hadronic vacuum polarisation (HVP) diagrams.

    The calculation of both of these is tricky, but the light-by-light contribution is believed to be under control and small. The disagreement is in the HVP part. This corresponds to mesons appearing in the loop, but there is a clever trick called the R-ratio approach, where experimental cross-section data can be used together with the optical theorem to give a very precise prediction. Many groups have calculated this, with results that agree very well.
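For reference, the dispersive (R-ratio) evaluation of the leading-order HVP piece rests on the standard formula (sketched here from memory -- see the reviews for the exact conventions and kernel): $$ a_\mu^{\mathrm{HVP,LO}} = \frac{\alpha^2}{3\pi^2} \int_{m_\pi^2}^{\infty} \frac{ds}{s}\, K(s)\, R(s), \qquad R(s) \equiv \frac{\sigma(e^+e^- \to \mathrm{hadrons})}{\sigma(e^+e^- \to \mu^+\mu^-)}, $$ where \(K(s)\) is a known kernel that strongly weights the low-energy region -- which is why the \(\pi^+\pi^-\) channel, and hence the experimental cross-section data, dominates both the value and the uncertainty.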

    On the other hand, it should be possible to calculate this HVP part by simulating QCD on the lattice. Previous lattice calculations disagreed somewhat, but also estimated their uncertainties to be large, comparable to the difference between their calculations and the experimental value or the value from the R-ratio. The new calculation claims that, with their new lattice QCD technique, they find that the HVP contribution should be large enough to remove the disagreement with experiment, with a tiny uncertainty. The paper is organised into a short letter of four pages, and then 79 pages of supplementary material. However, they conclude the letter with "Obviously, our findings should be confirmed –or refuted– by other collaborations using other discretizations of QCD."

    Clearly I am not qualified to comment on their uncertainty estimate, but if the new result is true then, unless there has been an amazing statistical fluke across all groups performing the R-ratio calculation, someone has been underestimating their uncertainties (i.e. they have missed something big). So it is something of a relief to see an even newer paper attempting to reconcile the lattice and R-ratio HVP calculations, from the point of view of lattice QCD experts. The key phrase in the abstract is "Our results may indicate a difficulty related to estimating uncertainties of the continuum extrapolation that deserves further attention." They perform a calculation similar to BMW but with a different error estimate; they give a handy comparison of the different calculations in this plot:

    The new result is LM 2020 (with BMW 2020 the result from two weeks ago). Of course this cannot be the final word, and my personal bias makes me hope without justification that it is the latter paper that is correct; it is certainly interesting times for these lattice computations!

    Update 12/03/20: A new paper yesterday tries to shed some new light on the situation: apparently it has been known since 2008 that an HVP explanation of the muon anomalous magnetic moment discrepancy was unlikely, because it leads to other quantities being messed up. In particular, the same diagrams that appear above also appear in the determination of the electroweak gauge coupling, which is precisely measured at low energies from Thomson scattering, and then run up to the Z mass: $$ \alpha^{-1} (M_Z) = \alpha^{-1} (0) \bigg[ 1 - ... - \Delta \alpha^{(5)}_{\mathrm{HVP}} (M_Z) + ... \bigg] $$ where the ellipsis denotes other contributions. Adding the BMW lattice contribution there at low energies and extrapolating up, the new paper finds that the fit is spoiled for the W-boson mass and also an observable constructed from the ratio of axial and vector couplings to the Z-boson: $$ A_{\ell} = \frac{2 \mathrm{Re}[ g_V^{\ell}/g_A^{\ell}]}{1 + (\mathrm{Re} [g_V^{\ell}/g_A^{\ell}])^2}$$ The key plot for this observable is:
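To see why \(A_\ell\) reacts so strongly, note that for leptons \(\mathrm{Re}[g_V^{\ell}/g_A^{\ell}] = 1 - 4\sin^2\theta_{\rm eff}\) is a small number, so small shifts in the effective mixing angle (which the running of \(\alpha\), and hence \(\Delta\alpha_{\rm HVP}\), feeds into) are amplified. A quick numerical sketch, using an illustrative value of \(\sin^2\theta_{\rm eff}\):

```python
# A_l = 2x/(1 + x^2) with x = Re[g_V/g_A] = 1 - 4 sin^2(theta_eff) for leptons.
def A_lepton(sin2_eff):
    x = 1.0 - 4.0 * sin2_eff
    return 2.0 * x / (1.0 + x**2)

sin2 = 0.23153          # illustrative effective mixing angle
print(f"A_l = {A_lepton(sin2):.4f}")          # ~0.147

# A 0.001 shift in sin^2(theta_eff) moves A_l by roughly 0.008, i.e. a
# ~5% relative change -- which is why the fit reacts so strongly.
print(f"shifted: {A_lepton(sin2 + 0.001):.4f}")
```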

    Thursday, 6 February 2020

    Brexit and me

    When I set up this blog I imagined that I would post things about my Brexit experiences in the build up to it actually happening. In the end it seemed like everything that could be said about it was written elsewhere, and in terms of consequences for me personally there was so much uncertainty about what would actually happen that it was not worth it. In the aftermath of Brexit day I received an email from the French government stating that, having already applied for a titre de séjour, I do not have to do anything for now and should not go to my local préfecture: you get the impression they are just annoyed by the whole Brexit thing and are trying to avoid hordes of confused anglais turning up demanding documents. I spent the weekend feeling a sense of loss, which seems to have replaced the rollercoasters of anger and hope of the last years. So I now feel that it's time to share my thoughts.

    In case you can't guess, since I'm a Brit living and working in France who will be directly impacted in myriad ways by Brexit, I am vigorously opposed to it, but I would have been so even if I had never emigrated.  In the UK, there has always been a nasty nationalistic undercurrent among a sizeable minority, with foreigners (especially French) being contemptible/lazy/inferior, and it seemed you could never mention Germany without someone referring to Nazis, "Two World Wars and one World Cup," etc, but then my teenage experiences visiting France and Germany made me realise how blinkered this was and that as Europeans we have so much more in common than we have differences. These formative experiences came not long after the founding of the EU in '92, when there was also a strong sense that all of Europe was coming together to work in a common interest and be stronger together, and Europe was being referred to positively in at least part of the public conversation in the UK.

    In my field, there are not many British people actually working in Europe. (I often wonder if this is true, and it may just be my perception -- I'd be interested to see some statistics.) I think this starts with the fact that very few British students seem to go abroad for their PhDs. Then there is a large number of PhD students trained at UK universities, but they start their doctoral training at a (much) younger age than their European counterparts, and a PhD in the UK lasts 3 (or, if they are lucky, 4) years, compared with 5+ in the US. This means that only a small proportion of UK PhD students are actually competitive internationally when they apply for postdocs: they often just do not have the same level of experience or publications. Partly it will also be because the UK has a large (and very international) academic jobs market, so more British people tend to be absorbed back there than elsewhere for permanent jobs.

    ... anyway, I have got used to being different, and I enjoy it, even as I feel and try to become more and more native as time goes on. The French system is quite open to non-French permanent researchers, so my lab is rather international. But I am currently the only Englishman, and I don't form part of any sort of British enclave or cabal. So as regards Brexit, I have been somewhat insulated from the apparently toxic atmosphere in the UK for the past three and a half years. Instead I have had a certain stifling sense of worry about the future, because I am one of the people who will be directly affected by the elimination of freedom of movement, and to whom the British government seems indifferent (not least because my right to vote in the UK will expire). At the moment this is coupled with anxiety about the French government's pension reforms (which will almost certainly affect me rather severely -- it's perhaps damning that it is still not clear what the reform will actually do in detail) and reforms of the funding of research (the LPPR, which aims to install "academic Darwinism") ...

    So what do French people, and more specifically French academics, think of Brexit? Well, for the large part it is viewed as both tragic and hilarious. It seems to have dispelled talk about "Frexit" (even among the hard-right Rassemblement National). And while at first I was debating Brexit most days with colleagues, now it is regarded as old news and barely merits headlines. People were far more interested in the goings-on in Parliament before Christmas, with my Italian colleagues crowing that they had been overtaken as the country with the most dysfunctional politics. On Friday 31st January, when it was mentioned on the radio, they quickly segued into the old anecdote about how "God save the King/Queen" was actually written to celebrate the successful recovery of Louis XIV from surgery for an anal fistula, then stolen by (German-born) Handel and translated for an English audience. The other mention in the news was the storm in a teacup when Guernsey forbade French boats from fishing in its waters; it climbed down immediately when the French ports refused Guernsey boats access.

    On a practical level, as I mentioned at the beginning, for the time being I will need a titre de séjour to prove that I can stay, or more specifically to exchange the EU one that I was granted last year for some new Brexit card. The French government has set up a Brexit website to help inform people of their rights, but since nothing has been firmly decided beyond the transition period the uncertainty will carry on -- although there is probably less uncertainty here about what will actually happen than in the UK. Indeed, I get the impression that there is relief that perfidious Albion will be out of the European decision-making process, and that there is an opportunity to attract businesses and people here. In particular, there were 10 special positions opened in the CNRS across all disciplines this year, which were unofficially intended to attract people fleeing the UK. I have heard of cases of academics doing exactly that already.



    Wednesday, 29 January 2020

    RPP 2020

    Yesterday I finished teaching my SUSY course for this academic year. I talked (among many other things) about going beyond the MSSM (the Minimal Supersymmetric Standard Model) and modern perspectives on the future of SUSY phenomenology. To add a little to the post from a couple of weeks ago, I presented three approaches that have been embraced by the community for a few years now:

    1. Carry on looking for the MSSM. As I said before, the LHC has done a good job of limiting the superpartners that are coupled to the strong force, but in reality a rather poor job for electroweak-charged states. There is also a good argument that the Higgs mass alone suggests we may never see the coloured particles without a new collider anyway, but this is not watertight.
    2. Look at non-minimal supersymmetric models. This is the approach I have favoured in my own work (in particular Dirac gaugino models).
    3. Abandon a complete SUSY theory at low energies, and look instead at high-scale SUSY or split SUSY. In particular, the latter allows you to keep gauge coupling unification and a natural dark matter candidate. On the other hand, it seems hard to find in string theory, because of the need for an approximate R-symmetry.
    In my earlier post, I stated that I would not recommend that new students exclusively study SUSY, and indeed I do not propose SUSY phenomenology as the main focus of my new students. This is at least partly a sociological statement: they would struggle to find a career in the current climate. That said, I strongly believe that it is important to know at least something about SUSY. But it is even more vital to learn about all of the problems of the Standard Model and the many potential solutions, and to look for the most promising ways to make progress based on current and future experiments in an open-minded way.

    Rencontres de Physique des Particules 2020

    I'm currently at the first day of the annual French particle theory meeting. There will be some nice political discussions alongside interesting talks that represent a little of the field. Notably, on physics Beyond the Standard Model there have been talks today about indirect dark matter searches, axions and the tension in the Hubble constant, by people recruited in recent years to the CNRS; there will be more talks tomorrow and Friday by recent recruits and people hoping to be recruited (this meeting often serving as a shop window).

    France has a unique way of funding research, in that the CNRS hires people to work solely on research, and supports them in "mixed labs" where there are also university professors and "maîtres de conférences", who are the equivalent of assistant or associate professors elsewhere. Unlike their university counterparts, CNRS researchers do not have to teach, and have a huge amount of liberty. For this last reason I absolutely love my job. The French system also believes in recruiting people relatively early in their careers, but this requires good judgement in finding the stars of tomorrow rather than people who are already established. So at this meeting you could say there is a sampling of what the CNRS committee may believe (or what some people hope it believes) is the future of the field ...

    Another feature of the French system is, perhaps paradoxically in the land of "liberté, égalité, fraternité," that it is ultra-elitist. The "grandes écoles" (in particular Ecole Normale and Ecole Polytechnique) are incredibly selective institutions for students, but most people outside of France have not heard of them because they barely register on the Shanghai rankings -- but only because they are small (they punch well above their weight, and top the tables of "small universities"). One interesting thing we heard today, from the Vice Provost for Research at Ecole Polytechnique, was that the government aims to make Ecole Polytechnique into a French version of MIT, which would mean doubling the number of students -- but multiplying the budget by a factor of ten (this is probably a slightly unfair calculation, as it would not include the salaries of CNRS researchers, for example). Apparently the way they are trying to achieve this is to "work out how to get money out of" large multinationals, essentially using the students' skills as a "goldmine." Sadly these companies are not at all interested in basic research, and so funding for future fundamental science would have to be somehow siphoned off from that obtained to do machine learning etc.

    Ah, I've just veered into cynicism, which I want to avoid on this blog, so I better go and take part in the "table ronde" discussion and save real politics for a different post ...