Thursday, 3 July 2025

The Ant Mill

Jesper Grimstrup kindly sent me an electronic copy of his new book, The Ant Mill. He was also kind enough to give me some feedback on a first version of this review.

It has a foreword by Peter Woit, who has commented briefly about the book on his blog; the author also has a substack. The subtitle is 'How theoretical high-energy physics descended into groupthink, tribalism and mass production of research', so you would expect it to be the sort of thing I would object strongly to. However, I am the sort of person who likes to read things that challenge me; the only thing in this book that gets under my skin is its attacking the whole of academia in public.

The story is an interweaving of the author's personal experiences in academia with his general observations. The personal story is interesting, much as I expect those of most people who have spent several years in academia would be. And he has clearly spent a lot of time thinking about thinking about research. I love meta-activities of this sort; the best example I know of is You and Your Research by Hamming, which I stumbled on as a postdoc. Indeed, the fact that writings of this sort are shared among young researchers is itself evidence against the central thesis of Grimstrup's book.

The market for books attacking high-energy physics seems to be burgeoning. On the one hand, Hossenfelder believes that we have become 'lost in math'; on the other, Woit believes we are not mathematical enough; both attack string theory as a failed programme. Grimstrup's book is in the mathematical camp, with the novelty that he piles scorn on all popular approaches to quantum gravity, in particular loop quantum gravity and noncommutative geometry, since these are the ones he has come into closest contact with. His observations about string theorists mainly concern the shoddy way he was treated during his time at the NBI, with several egregious examples of bad behaviour. We are led to conclude that it is not just string theorists who have formed a closed tribe, but that there are several such groups crowding out innovation.

One problem with the combination of the general and the personal is that Grimstrup constantly refers to his own research programme and gives examples of how it has generally been ignored within academia. For example, he starts the book with a copy of a grant application by a 31-year-old Niels Bohr for an entire institute, and contrasts this with a grant application of his own whose refusal effectively ended his career within academia (my understanding is that at the NBI in Copenhagen it is common to ask for, and obtain, grants to pay your own salary and prolong temporary contracts). He writes that he does not do this to compare himself to Niels Bohr, but he does so repeatedly (mostly indirectly) throughout the book -- not in a self-aggrandising way, but in the sense that you can almost feel his frustration coming through the pages that his expectations did not meet reality. It seems like bait at times, inviting anyone who disagrees with the general thesis to attack him personally. Instead, I will have a look at his papers with an open mind after writing this review, and keep my thoughts on them to myself.

The book made me think of how many of us enter academia. We grow up reading popular science accounts idolising physicists from a century ago. And it made me think more of the self-actualisation messages that were rammed down our throats in the popular culture of the 80s and 90s: follow your dreams, stick to your principles, be true to yourself; this is the most important thing in life, and you shouldn't worry about money, just be happy. And: working hard and getting good grades is the way to get to the top. The problem is that this is largely obsolete: it is based on the world that existed after the Second World War, when there was a scarcity of labour and an economic boom. Then -- if you were from the right background and your face fit -- you could work hard, get a PhD and walk into a permanent academic job (yes, this is a caricature). Science was respected, and so were scientists; high-energy physics was at the top of the tree because of its connection with technological advances and nuclear weapons. That world no longer exists; while in many ways this is for the better, it is undeniable that we live in a world of much greater competition, and public scepticism about science is increasing.

The scientific community has expanded, as has the population; more importantly, the spread of education, global travel and communication have meant that the number of people around the world involved in research is much greater than it was. Grimstrup notes that increasing the size of the academic community has led to fundamental changes of behaviour: the professionalisation of research, and groupthink. It is clearly true that the Matthew effect exists in many branches of society, and therefore also in academia; and governments' desire to exert some form of oversight in exchange for the funds they provide has definitely changed the incentives for researchers. One aspect of this is that it is hard to judge the work of people from other fields, but we are required to do so; and it is then difficult to argue with quantitative measures such as numbers of papers, citations and h-indices. Then, of course, for certain people the measure becomes the target.

Grimstrup rails against all these changes; he clearly believed that the correct thing for an aspiring researcher to do would be to work on their own ideas, stick to their principles and not compromise. They should work for a long time, in isolation, on a major paper, put it on arXiv.org, and the next day their colleagues would read it and ask interesting questions about it. Fame and fortune would follow. What shocked Grimstrup was that not only did people not care about any papers he posted, but a young competitor once even told him that some ideas are simply not worth pursuing, however interesting they may be. For sure, this is horrible and shocking behaviour, and does not reflect well on the anonymous person who said it.

For my part, I am still naive enough to think that if new ideas are good, someone will recognise them as such, and network effects will make them known. I know that many researchers already think more deeply about what they are doing than Grimstrup gives us credit for: we discuss it during seminars, over a drink with colleagues, in the coffee breaks of conferences, during our annual or five-year reviews, and in grant applications. When I discussed this review with a string-theorist colleague, they remarked, "of course we know the situation sucks!" I think Grimstrup is therefore wrong to tar everyone with the same brush: the diversity in our community has increased greatly with time, and there are in fact strong incentives to take a risk on a novel idea, because the rewards of opening a new research direction are immense. Being the originator of an idea, or the first to recognise the merit of even an old, forgotten one, can yield tremendous results and even greater recognition nowadays, thanks to the same network effects. Hence, starting a new field, or even a subfield, is something most researchers aspire to; the rewards for doing so are even greater now than in times gone by, and evidence that this is possible is given in this very book: the existence of several communities working on different approaches to quantum gravity. He argues that these are now old and stale, but my point is that the way they were able to take root at all shows how this can happen. Many subfields have sprung up more recently, and in other branches of HEP there are of course many examples. Nowadays things can change very quickly: a good new idea will be rapidly jumped on once it is recognised, and people are constantly on the lookout.

Grimstrup also, like Lee Smolin, divides researchers into visionaries and technicians. He then complains that the technicians have taken over, with many disparaging comments about them digging endless holes. He also complains that there is an incentive to collaborate in modern research, that only collaborators survive in the system: he has evidence that being a lone wolf is a poor survival strategy. He believes that we should work on our own; yet at the same time visionaries need to collaborate with technicians. I found this very jarring. Beyond the facile placing of people into boxes, he overlooks the benefits of collaboration -- in his opinion it is just about inflating the number of papers one person can sign (and for sure there are people who cynically do this). But to me, discussing with other people, or even just explaining something, is often the quickest way to generate genuinely new ideas, or solutions to problems that we might never have come up with alone. At the same time, there are plenty of people who do write papers alone; to take a leaf from his book and share a personal story, I once had a comment on a postdoc application that I had no single-author papers and therefore had not demonstrated independence. Hence, there are incentives and good reasons for young researchers to work alone sometimes. I then wrote a single-author paper, as I have occasionally done since (and got the fellowship the next time I applied); I would agree that there is pleasure and some advantage in doing this, but doing it all the time would mean missing out on lots of new ideas and other perspectives, as well as the pleasure of regular interactions with collaborators, and it would also limit the scope of my projects, where I benefit from others' expertise. Or a collaboration may just mean working with a student, pursuing my ideas (hopefully they contribute some of their own!) and imparting my knowledge in the process.
This is why I do not think that encouraging people to cloister themselves away and work alone for long periods is productive or healthy.

The book also has a very narrow focus as to the goal of high-energy physics. For the author, the quest is "the next theory", but in essence this means a theory of quantum gravity, which he acknowledges would be far from testable with any present or near-future data. Otherwise, we should look for a mathematically rigorous definition of quantum field theory; he hopes the two will turn out to be one and the same thing. This latter problem has proven both very hard and not obviously useful -- it is certainly not obvious that the solution should even be unique: for example, a theory of strings would cure ultraviolet divergences, and the question of whether strings are necessary for such a theory is one that I know people have tried to explore. I also recently attended a talk by Michael Douglas reviewing recent attempts at rigorous QFT, so it is a subject that is regarded as important but very difficult, and is still being explored by a small number of people. Regarding quantum gravity, some people in the community have taken the view that if you have no data, it is not a good problem, and are working on other things. Others try to make contact with data using e.g. EFT approaches to measuring quantum effects of gravity. The string theory community might say that we do have a theory of quantum gravity -- in fact, a landscape of them -- and try e.g. to use it to answer questions about black hole information. But at the same time, some people then complain that the leading string theorists have moved on to other things: there are lots of important open fundamental problems, and we just do not know how they are interlinked, if at all!

Grimstrup's insistence that the solution to what he sees as problems is to shrink competition and to encourage research outside of academia reminded me of another Dane, the subject of another book I read recently: King Cnut, famous for (presumably apocryphally) standing on the beach in front of his ministers and commanding the tide to turn back. Otherwise, Grimstrup hopes for a crisis, perhaps one provoked by his book. He explicitly states that he does not want to fuel anti-establishment or anti-academic movements, but I suspect that the only crises we might suffer would not be good for the field. Perhaps one is already taking place in the US; perhaps people will take his message to heart despite his protests and start a DOGE-style decimation of research. Necessarily, in science we mark our own homework: only other scientists are capable of judging the claims of their peers. If we start opening this up to question, then we will only end with government appointees deciding what the acceptable topics and directions are, or shutting public funding down altogether. What would be left over would surely mean even greater competition for scarce resources.

For me, the solution to the problems in the book, to the extent that I agree with them, is to regularly remind ourselves that we should always maintain a childlike curiosity and not close our minds to new ideas and new possibilities. This is the message of Hamming's talk, and it is very well put in the writings of Feynman (whom Grimstrup bizarrely dismisses as a technician compared to Bohr). Otherwise, of course, in science it is necessary to have a community spirit, to realise that we are all trying to make progress in the best way we know how, and to help each other do so; and it is necessary to maintain healthy competition as a motivator. But both conflicting instincts -- to compete and to group into communities -- are vital parts of human nature, and denying this has been the mistake of utopians throughout history.

I am also sure that many of the complaints Grimstrup assigns to high-energy physics could be applied to society more generally. So instead of trying to hold back or reverse the societal changes of the last century, we should try to work with them as best we can. We have to accept that we now live in an attention economy, and this brings new opportunities: blogging, social media, writing articles in science magazines or the popular press, etc. Since Grimstrup is now, interestingly, an independent scientist, perhaps tying his own research programme so closely to his book is embracing the modern world at last, and creating a brand as a radical outside thinker that will be attractive to private backers. He promotes the path that he has followed -- crowdfunding his research or seeking the support of patrons -- as a possible path for the independently minded once they have completed their training in academia, and in this I wish him well: he is clearly serious, determined and sincere. But while this is now part of twenty-first-century society, many people have noticed that this modern trend is a return to the nineteenth century (or even earlier, e.g. Leonardo da Vinci being invited to France by François I), when a wealthy patron was the only source of funding.

Sunday, 11 May 2025

Choose France for Science

In the news this week was the joint announcement by the presidents of the European Commission and France of initiatives about welcoming top researchers from abroad, with the aim being especially to encourage researchers from the USA to cross the Atlantic. I've seen some discussion online about this among people I know and thought I'd add a few comments here, for those outside Europe thinking about making such a jump.

Firstly, what is the new initiative? Various programmes have been put in place; on the EU side, it seems to consist of encouraging applications to Marie Curie Fellowships for postdocs and to ERC grants. It looks like there is some new money, particularly for Marie Curie Fellowships for incoming researchers. Applying for these is generally good advice anyway, as they are prestigious programmes that open the way to a career; in my field a Marie Curie fellowship often leads to a permanent position, and an ERC grant is so huge that it opens doors everywhere. In France, the initiative seems to be an ANR programme targeting specific strategic fields, so it is unlikely to be relevant for high-energy physicists (despite the fact that they invited Mark Thomson to speak at the meeting). But France can be a destination for the European programmes, and there are good reasons to choose France as a destination.

So the advice would seem to be to try out life in France with a Marie Curie Fellowship, and then apply through the usual channels for a permanent position. This is very reasonable, because it makes little sense to move permanently before having some idea of what life and research are actually like here. I would heartily recommend it. There are several permanent positions available every year in the CNRS at the junior level, but because of the way CNRS hiring works -- via a central committee that decides on positions for the whole country -- it is not very easy to replace someone who leaves, and people job-hopping is a recurrent problem. There is also the possibility to enter the CNRS at a senior level, with up to one position available in theoretical physics most years.

I wrote a bit last year about some of the great things about the CNRS, but I will add a bit now. Firstly, what is it? It is a large organisation that essentially just hires permanent researchers, who work in laboratories throughout the country. Most of these laboratories are hosted by universities, such as my lab (the LPTHE), which is hosted by Sorbonne University. Most of them are mixed, meaning that they also include university staff, i.e. researchers who also teach undergraduates. University positions have a similar but parallel career path to the CNRS, but since the teaching is done in French, and because the positions open on a rather unpredictable basis, I won't talk about them today. The CNRS positions are 100% research; there is little administrative overhead, and therefore plenty of time to focus on what is important. This is the main advantage of such positions; but the organisation of researchers into laboratories is also a big difference from the Anglo-Saxon model. My lab is relatively small, yet contains a large number of people working in HEP, and this provides a very friendly environment with lots of interesting interactions, without being lost in a labyrinthine organisation or having key decisions taken by people working in vastly different (sub)fields.

The main criticisms I have seen bandied around on social media about the CNRS are that the pay is not competitive, and that CNRS researchers are lazy/do not work. I won't comment about pay, because it's difficult to compare. But there is plenty of oversight by the CNRS committee -- a body of our peers elected by all researchers -- which scrutinises activity, in addition to deciding on hiring and promotions. If people were really sitting on their hands then this would be spotted and nipped in the bud; but the process of doing this is not onerous or intrusive, precisely because it is done by our peers. In fact, the yearly and five-yearly reports serve a useful role in helping people to focus their activities and plan for the next one to five years. There is also evaluation of laboratories and universities (the HCERES, which will now be changed into something else) that however seems sensible: it doesn't seem to lead to the same sort of panic or perverse incentives that the (equivalent) REF seems to induce in the UK, for example. 

The people I know are incredibly hard-working and productive. This is, to be fair, also a product of the fact that we have relatively few PhD students compared to other countries. This is partly by design: the philosophy is that it is unfair to train lots of students who can never get permanent positions in the field. As a result, we take good care of our students, and the students we have tend to be good; but since we have the time, we mostly do research ourselves, rather than just being managers. 

So the main reason to choose France is to be allowed to do the research you want to do, without managerialisation, bureaucrats or other obstacles interfering. If that sounds appealing, then I suggest getting in touch and/or arranging to visit. A visit to the RPP or one of the national meetings would be a great way to start. The applications for Marie Curie fellowships are open now, and the CNRS competition opens in December with a deadline usually in early January. 

Sunday, 5 January 2025

Making back bacon

As a French citizen I should probably disavow the following post and remind myself that I have access to some of the best food in the world. Yet it's impossible to forget the tastes of your childhood. And indeed there are lots of British things that are difficult or very expensive to get hold of in France. Some of them (Marmite, Branston pickle ...) I can import via occasional trips across the channel, or in the luggage of visiting relatives. However, since Brexit this no longer works for fresh food like bacon and sausages. This is probably a good thing for my health, but every now and then I get a hankering for a fry-up or a bacon butty, and as a result of their rarity these are amongst the favourite breakfasts of my kids too. So I've learnt how to make bacon and sausages (it turns out that boudin noir is excellent with a fry-up and I even prefer it to black pudding). 

Sausages are fairly labour-intensive, but after about an hour or so's work it's possible to make one or two kilos worth. Back bacon, on the other hand, takes three weeks to make one batch, and I thought I'd share the process here.

1. Cut of meat

The first thing is to get the right piece of pork, since animals are divided up differently in different countries. I've made bacon several times now and keep forgetting which instructions I previously gave to the butcher at my local Grand Frais ... Now I have settled on asking for a carré de porc, and when they (nearly always) tell me that they don't have it in, I ask for côtes de porc première in one whole piece, and try to get them to give me a couple of kilos. As you can find on Wikipedia, this is the same piece of meat used to make pork chops. I then ask them to remove the spine, but it should still have the ribs. So I start with this:



2. Cure

Next the meat has to be cured for 10 days (I essentially follow the River Cottage recipe). I mix up a 50-50 batch of PDV salt and brown sugar (1 kg in total here), and add some pepper, juniper berries and bay leaves:


Notice that this doesn't include any nitrites or nitrates. I have found that nitrates/nitrites are essential for the flavour in sausages, but in bacon the only thing that they will do (other than be a carcinogen) as far as I can tell is make the meat stay pink when you cook it. I can live without that. This cure makes delicious bacon as far as I'm concerned. 

The curing process involves applying 1/10th of the mixture each day for ten days and draining off the liquid produced at each step. After the first coating it looks like this:


The salt and sugar remove water from the meat, and penetrate into it, preserving it. Each day I get liquid at the bottom, which I drain off and apply the next cure. After one day it looks like this:


This time I still had liquid after 10 days:

3. Drying

After ten days, I wash/wipe off the cure and pat it down with some vinegar. If you leave cure on the meat it will be much too salty (and, to be honest, this cure always gives quite salty bacon). So at this point it looks like this:


I then cover the container with a muslin that has been doused with a bit more vinegar, and leave it in the fridge (at first) and then in the garage (since it's nice and cold at this time of year) for ten days or so. This part removes extra moisture. Small amounts of white mould may appear during this stage, but these are totally benign: you only need to worry if it starts to smell or you get blue/black mould, which has never happened to me so far.

4. Smoking

After the curing/drying, the bacon is ready to eat and should in principle keep almost indefinitely. However, I prefer smoked bacon, so I cold smoke it. This involves sticking it in a smoker (essentially just a box where you can suspend the meat above some smouldering sawdust) for several hours:

The sawdust is beech wood and slowly burns round in the little spiral device you can see above. Of course, I close the smoker up and usually put it in the shed to protect against the elements:


5. All done!

And then that's it! Delicious back bacon that really doesn't take very long to eat:


As I mentioned above, it's usually still a bit salty, so when I slice it to cook I put the pieces in water for a few minutes before grilling/frying:

Here you see that the colour is just like frying pork chops ... but the flavour is exactly right!

Monday, 5 February 2024

Rencontres de Physique des Particules 2024

Just over a week ago the annual meeting of theoretical particle physicists (RPP 2024) was held at Jussieu, the campus of Sorbonne University where I work. I wrote about the 2020 edition (held just outside Paris) here; in keeping with tradition, this year's version also contained similar political sessions with the heads of the CNRS' relevant physics institutes and members of CNRS committees, although they were perhaps less spicy (despite rumours of big changes in the air). 

One of the roles of these meetings is as a shop window for young researchers looking to be hired in France, and a great way for them to demonstrate that they are interested in, and have a connection to, the system. Of course, this isn't and shouldn't be obligatory by any means; I wasn't really aware of it before entering the CNRS, though I had many connections to the country. But that sort of thing seems especially important after the problems described by 4gravitons recently in his post about getting a permanent job in France -- being able to settle in a country is non-trivial: it is a big worry for future employers, and often not given enough weight by candidates fighting tooth and nail for the few jobs there are. There was another recent case of someone getting a (CNRS) job -- to come to my lab, even -- who much more quickly decided to leave the entire field for personal reasons. Both these stories saddened me. I can understand -- there is the well-known Paris syndrome, for one thing -- and the current political anxiety about immigration and the government's response to the rise of the far right (across the world), coupled with Brexit, is clearly making things harder for many. These stories are especially worrying because we expect to be recruiting for university positions in my lab this year.

I was obviously very lucky and my experience was vastly different; I love both the job and the place, and I'm proud to be a naturalised citizen. Permanent jobs in the CNRS are amazing, especially in terms of the time and freedom you have, and there are all sorts of connections between the groups throughout the country such as via the IRN Terascale or GdR Intensity Frontier; or IRN Quantum Fields and Strings and French Strings meetings for more formal topics. I'd recommend anyone thinking about working here to check out these meetings and the communities built around them, as well as taking the opportunity to find out about life here. For those moving with family, France also offers a lot of support (healthcare, childcare, very generous holidays, etc) once you have got into the system.

The other thing to add that was emphasised in the political sessions at the RPP (reinforcing the message that we're hearing a lot) is that the CNRS is very keen to encourage people from under-represented groups to apply and be hired. One of the ways they see to help this is to put pressure on the committees to hire researchers (even) earlier after their PhD, in order to reduce the length of the leaky pipeline.

Back to physics

Coming back to the RPP, this year was particularly well attended and had an excellent programme of reviews of hot topics, invited and contributed talks, put together very carefully by my colleagues. It was particularly poignant for me because two former students from my lab with whom I worked a lot, one of whom recently got a permanent job, were speaking; in addition, both a former student of mine and his current PhD student were giving talks: this made me feel old. (All these talks were fascinating, of course!)

One review that stood out as relevant for this blog was Bogdan Malaescu's review of progress in understanding the problem with muon g-2. As I discussed here, there is currently a lot of confusion about what the Standard Model prediction for that quantity should be. This is obviously very concerning for the experiments measuring muon g-2, which in a paper last year reduced their uncertainty by a factor of 2 to $$a_\mu (\mathrm{exp}) = 116\,592\,059(22)\times 10^{-11}. $$

The lattice calculation (which has now been confirmed by several groups) disagrees with the prediction of the data-driven R-ratio method, however, and there is a race on to understand why. New data from the CMD-3 experiment seem to agree with the lattice result, but combining all global data on measurements of \(e^+ e^- \rightarrow \pi^+ \pi^- \) still gives a discrepancy of more than \(5\sigma\). There is clearly a significant disagreement within the data samples used (indeed, CMD-3 significantly disagrees with their own previous measurement, CMD-2). The confusion is summarised by this plot:

As can be seen, the finger of blame is often pointed at the KLOE data; excluding it but including the others in the plot gives agreement with the lattice result, and a significance of non-zero \(\Delta a_\mu\) compared to experiment of \(2.8\sigma\) (or, for just the dispersive method without the lattice data, \( \Delta a_\mu \equiv a_\mu^{\rm SM} - a_\mu^{\rm exp} = (-123 \pm 33 \pm 29 \pm 22) \times 10^{-11} \), a discrepancy of \(2.5\sigma\)). In Bogdan's talk (see also his recent paper) he discusses these tensions, and also the tensions between the data and the evaluation of \(a_\mu^{\rm win}\), the contribution coming from a narrow "window": when the total contribution to the hadronic vacuum polarisation is split into short-, medium- and long-distance pieces, the medium-range part should be the one most reliable for lattice calculations -- at short distances the finite lattice spacing introduces discretisation errors, and at long distances the lattice volume may not be large enough. There he shows that, if we exclude the KLOE data and include just the BABAR, CMD-3 and Tau data, then while the overall result agrees with the BMW lattice result, the window quantity disagrees by \(2.9 \sigma\) [thanks Bogdan for the correction to the original post]. It's clear that there is still a lot to be understood in the discrepancies of the data, and perhaps, with the added experimental precision on muon g-2, there is even still a hint of new physics ...
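As a quick sanity check on the quoted numbers, one can verify that combining the three quoted uncertainties on the dispersive-method \(\Delta a_\mu\) in quadrature (a sketch that assumes they are independent, which is not exactly true for systematic errors) does reproduce the \(2.5\sigma\) significance:

```python
import math

# Dispersive-method result quoted above, in units of 1e-11:
# Delta a_mu = -123 +/- 33 +/- 29 +/- 22
central = -123.0
uncertainties = [33.0, 29.0, 22.0]

# Combine the three uncertainties in quadrature (assumes independence)
total = math.sqrt(sum(u**2 for u in uncertainties))

# Significance of the deviation from zero
sigma = abs(central) / total
print(f"total uncertainty ~ {total:.1f}, significance ~ {sigma:.1f} sigma")
# -> total uncertainty ~ 49.1, significance ~ 2.5 sigma
```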

Monday, 26 September 2022

Pointless particle theorists?

The latest (Twitter) storm in a teacup on this subject nearly prompted me to return to my soapbox, but now my favoured news source (not the Onion but the Grauniad) has an article attacking particle theorists. Yet again, the usual suspect with books to sell and a career to save is dunking on particle physics: the supposedly finished branch of science that should give up and go home now that we have found the Higgs boson, because we can't prove there's a new particle just around the corner, and isn't the Standard Model great anyway? The problem is, of course, that maverick (Top Gun Maverick!) outsiders revealing the "truth" that elites in an ivory tower are spending oodles of public money on failed ideas, and that we need a whistleblower to expose the fraud, gain such traction with the taxpaying public.

I even see other physicists defending this public anti-vax-style conspiracy-theory propagation as good for the field: it rattles people's cages and gets them to question their assumptions. The sociology of this is quite interesting, because there are people within the field who either work on niche theories or just want to take down adjacent fields, and who would like to see the popular paradigms brought down a peg or two, presumably naively believing that this will lead to more resources (human, cash or just attention) being sent in their direction. But of course public disparaging of scientists can only ever lead to a general reduction of public trust and a shrinking of the pie for everyone. There already exist many internal mechanisms for theorists to (re)consider what they are working on, e.g.:

  • Grant proposals. The good thing about writing them is that they give you an opportunity to think deeply about what you really want to work on. Boring or uninnovative things just don't get funded. Of course, the evaluation systems may be terrible and biased and reward things not being too far from the reviewer's/panel's interests ... there is much room for improvement; but at least the writing part can be useful.
  • Researcher evaluations. At least here in France we must make a declaration of our activities and plans every year, and write a longer report every few years. This serves a similar purpose to the above.
  • New hires/promotions. Groups want to hire people who are working on interesting stuff. Hiring someone permanently is an endorsement of a field.
  • Citations, talk invitations etc. While citations may come simply because others are working on something similar, and people like to invite their friends for talks, sufficiently interesting or new work will persuade people to follow it up and will garner attention.
These are all group mechanisms whereby scientists evaluate each other and what they are doing themselves. I am sure someone has studied the game theory of it; indeed, as individual researchers trying to succeed in our careers we all have to adopt strategies to "win", and it is a shockingly competitive system at every stage. Of course, promoting a new idea can be difficult -- we are in a constant battle for attention (maybe writing a blog is a good strategy?) -- but if there is something really promising, people will not ignore it. Ambulance chasing (where tens or hundreds of papers follow a new result) is a sign that plenty of people are ready to do exactly that. If a maverick outsider really had a great idea, there would be no shortage of people willing to follow. To take an example, if "the foundations of physics" really offered opportunities for rapid important progress, people would vote with their feet. I see examples of this all the time with people trying out Quantum Computing, Machine Learning, etc.

I'll let you in on a secret, therefore: the target of the bile is a straw man. I don't know anyone hired as a BSM model builder in recent years. People became famous for it in the 90s/early 00s because there was no big experiment running and the field was dreaming big. Now we have the LHC, and that has focussed imaginations much more. People hired now as phenomenologists may also do some ambulance chasing on the side, but it is not their bread and butter. Inventing models is usually a difficult and imaginative task, aimed at connecting often disparate ideas, but it's not the only task of a phenomenologist: the much bigger ones are understanding existing models and trying to connect theory to experiments!


In defence of ambulance chasing (retch)

When an experiment announces something unexpected (as happens quite frequently!), what is the correct response? According to our outsider, presumably we should just wait for it to go away and for the Standard Model to be reaffirmed. People in the field instead take the view that we should be curious and try to explain it; the best ideas come with new features or explain more than one anomaly. What should we do with wrong explanations? Should we be punished for not coming up with proven theories? Do we need external policing of our curiosity? What does ambulance chasing really cost? Theory groups are attractive for many departments precisely because they are cheap -- theorists don't need expensive experiments or engineers/technicians/people to wash the test tubes. The reward for coming up with a failed theory is usually nothing; but it costs almost nothing too. So why the bitterness? Of course, we can begrudge people becoming famous for coming up with fanciful science fictions -- the mechanisms for identifying promising ideas are far from perfect -- but usually they have come up with something with at least some degree of novelty.

When looking at CVs, it's very easy to spot and discount 'ambulance citations.' By the way, another phenomenon is to sign 'community papers', where tens or hundreds of authors group-source a white paper on a popular topic; and a third is to write a review of a hot subject. Both of these work very well to generate citations. Should we stop doing them too? In the end, the papers that count are the ones with an interesting result or idea, and there is no sure mechanism for writing them. In the aftermath of every ambulance-chasing cycle there are almost always papers with some interesting nugget of an idea, something that remains that would not have been suggested otherwise, or computations that would otherwise not have been done -- and that hopefully brings us closer to discoveries.

Recent progress

We have an amazing collider experiment -- the LHC -- which will run for another ten years or so at high luminosities. We can either decide in advance that it will tell us nothing about the energy frontier, or we can try to make the most of it. The fundamental problems with our understanding of physics have not been solved; I wrote a response to a similar article in 2020 and I stand by my opinion of the state of the field, and you can look there for my laundry list of problems that we are trying to make sense of. What has changed since then? Here are just a few things, biased by my own interests:

Muon g-2

The measurement of the muon g-2 by Fermilab confirmed the earlier anomalous measurement. However, we now have the problem that a series of lattice QCD groups have a calculation implying that the Standard Model prediction is closer to the measurement, in contradiction with the R-ratio method. Someone has underestimated their uncertainties, but we don't know who! This is a problem for theorists working with the experiments; perhaps the new MUonE experiment will help resolve it?

CDF measurement of the W mass

As reported everywhere, the CDF experiment at the Tevatron (the previous energy-frontier collider, which shut down ten years ago) analysed its data and found a measurement of the mass of the W boson in enormous disagreement with the Standard Model: 7 standard deviations. If confirmed, it would signal new physics around the TeV scale. Since the W boson mass is just about the most generic thing that can be modified by new particles near the electroweak scale, there are any number of new theories that can explain it (as the arXiv this year will attest). There is also a tension of about 4 standard deviations with a measurement at the LHC, which has a much larger uncertainty. Another LHC measurement is now needed to settle the issue, but this may take a long time as it is a difficult measurement to make at the LHC. (Maybe we should just not bother?) Other than lots of fanciful (and dull) model building, this has recentred theory efforts on how to extract information from the W boson mass in new theories, which is a problem of precision calculations and hugely interesting ...
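For the curious, the size of the tension can be estimated from the widely quoted numbers (quoted here from memory, so check the original papers for the definitive values): CDF measured \(m_W = 80433.5 \pm 9.4\) MeV against a Standard Model prediction of roughly \(80357 \pm 6\) MeV.

```python
import math

# Widely quoted values in MeV -- treat these as illustrative, not definitive
m_cdf, err_cdf = 80433.5, 9.4   # CDF II measurement (2022)
m_sm, err_sm = 80357.0, 6.0     # Standard Model global-fit prediction

# Naive pull, combining the two uncertainties in quadrature
pull = (m_cdf - m_sm) / math.sqrt(err_cdf**2 + err_sm**2)
print(f"{pull:.1f} sigma")  # roughly 7 standard deviations
```

This naive pull ignores correlations and the details of the global electroweak fit, but it shows where the headline number comes from.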

The XENON1T anomaly disappeared

The XENON collaboration released new results this summer showing that the anomaly at low recoils they had found disappeared with more data. While this immediately killed many theories designed to explain it, the lasting effect is that people have given serious thought to low-mass dark matter models that could have explained it, and have come up with new ways to search for them. Without looking, we don't know if they are there!

An anomaly in extragalactic background light was found

A four-standard-deviation anomaly was reported in the extragalactic background light (EBL), i.e. there is too much light coming from outside galaxies in every direction! This would naturally be explained by a decaying axion-like particle -- indeed, measurements of the EBL have long been used as constraints. (Maybe we should never have come up with axions?)

The LHC reported three other anomalies

In analysing the data of Run 2, three different searches reported anomalies of three standard deviations. Explanations have been suggested; perhaps we should check whether they are correlated with other searches, or find new ways of corroborating the possible signals? Or just not bother?

Run 3 has started

Run 3 of the LHC has started with a slightly higher energy, and then stopped due to technical problems. It will be some time before significant luminosity is collected, and our experimentalists are looking at new types of searches that might lead to discoveries. Their main motivation is that new signatures equal new reach. Our colleagues certainly need justification or interpretations for their results, but while whether the models really offer explanations of other types of new physics (e.g. dark matter) is of course a concern, it is not the main one. The reason to do an experiment at the LHC is curiosity: experimentalists are not children looking for theorists' latest whims. The point is that we should test the theories every which way we can, because we don't know what we will find. A good analogy might be zoologists looking for new species: they might go to a previously unexplored region of the earth, or they might find a new way of looking at ones they have already visited, e.g. by turning over rocks that they would otherwise have stepped over.

Long Lived Particles

One of these classes is long-lived particles (LLPs) -- which I have written about before on here -- and they have also caught the imagination of theorists. In fact, I'm working with experimentalists with the aim of making their searches more widely applicable.

SMEFT

Two years ago I wrote that I thought the field was less inclined to follow hot topics, and that this is healthy. This is still the case. However, some hot topics do exist, and one of them is the Standard Model Effective Field Theory (SMEFT). There is now rapid development of all manner of aspects, from phenomenology to exploration of the higher-order version to matching, etc.

Machine Learning

Another example is machine learning, which is becoming more prevalent and interesting, especially at the interface between theory and experiments.

Of course, there are many more developments and I'm sure many I'm not aware of. Obviously this is a sign of a field in big trouble!

Sunday, 10 January 2021

Recasting a spell

For three successive Januaries, since I started this blog in 2018, I have posted a list of things to look forward to, which for whatever reason didn't materialise and so were essentially repeated the next year. Given the state of the world right now, some positive thinking seems to be needed more than ever, but it would be a bit of a joke to repeat the same mistake again. In particular, the measurement of the muon anomalous magnetic moment (which is apparently all I blog about) has still not been announced, and I'm led to wonder whether last year's controversies regarding the lattice QCD calculations have played a role in this, muddying the water.

Instead today I want to write a little about an effort that I have joined in the last couple of years, and really started to take seriously last year: recasting LHC searches. The LHC has gathered a huge amount of data and both main experiments (CMS and ATLAS) have published O(1000) papers. Many of these are studying Standard Model (SM) processes, but there are a large number with dedicated searches for New Physics models. Some of them contain deviations from the predictions of the Standard Model, although at present there is no clear and sufficiently significant deviation yet -- with the obvious exception of LHCb and the B-meson anomalies. Instead, we can use the data to constrain potential new theories.

The problem is that we can't expect the experiments to cover even a significant fraction of the cases of interest to us. For an important example, the simplest supersymmetric models have four 'neutralinos' (neutral fermions), two 'charginos' (charged fermions), plus scalar squarks and sleptons -- and two heavy neutral Higgs particles plus a charged Higgs; it is clearly impossible to list in a few papers limits on every possible combination of masses and couplings for these. So the experiments do their best: they take either the best-motivated or the easiest-to-search-for cases and try to give results that are as general as possible. But even then, supersymmetric models are just one example, and a given search channel (e.g. looking for pair production of heavy particles that then decay to jets plus some invisible particles seen as missing energy) could apply to many models; it is impossible in a given paper to provide all possible interpretations.

This is where recasting comes in. The idea is to write a code that can simulate the response of the relevant LHC detector and the cuts used by the analysis and described in the paper. Then any theorist can simulate signal events for their model (using now well-established tools) and analyse them with this code, hopefully providing a reasonably accurate approximation of what the experiment would have seen. They can then determine whether the model (or really the particular choice of masses and couplings for that model) is allowed or ruled out by the particular search, without having to ask the experiments to do a dedicated analysis.
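The logic can be illustrated with a deliberately toy sketch -- the event generation, "detector", cuts and all numbers below are invented for illustration, not taken from any real analysis:

```python
import math
import random

random.seed(1)

def toy_events(n, met_mean=250.0):
    """Generate toy truth-level missing-energy values (GeV) for a signal model."""
    return [random.expovariate(1.0 / met_mean) for _ in range(n)]

def smear(met, resolution=0.15):
    """Crude stand-in for a detector simulation: multiplicative Gaussian smearing."""
    return met * random.gauss(1.0, resolution)

def passes_cuts(met, met_cut=200.0):
    """The selection cuts as described in the (hypothetical) paper."""
    return met > met_cut

# Fraction of simulated signal events surviving detector + cuts
n_gen = 100_000
efficiency = sum(passes_cuts(smear(m)) for m in toy_events(n_gen)) / n_gen

# Expected signal yield = cross-section x luminosity x efficiency
xsec_fb, lumi_ifb = 1.0, 139.0   # hypothetical model point, Run-2-like luminosity
n_expected = xsec_fb * lumi_ifb * efficiency

# Compare to the (hypothetical) 95% CL upper limit quoted by the search
n_limit_95 = 50.0
excluded = n_expected > n_limit_95
print(efficiency, n_expected, excluded)
```

A real recast replaces each toy ingredient with a proper event generator, a fast detector simulation (e.g. Delphes) and the published cut flow, plus a proper statistical treatment -- which is exactly where the hard work lies.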

Ideally, recasting would be possible for every analysis -- not just searches for particular models, but also Standard Model analyses (one example I have been involved in is recasting the search for four top-quarks, which was designed to observe the Standard Model process and measure its cross-section, but which we could then use to constrain new heavy particles that are produced in pairs and decay to pairs of tops). However, this is a lot of work: theorists do not have access to the experiments' own simulation software (and it would probably be too computationally intensive to be really useful anyway), so the experiments cannot just hand over their simulation code. There is then a lot of work in making approximations, which is why there is really some physics involved and it is not just a mechanical exercise. Sometimes the experiments provide pseudocode implementing the cuts made in the analysis, which helps in understanding the paper (where there is sometimes some ambiguity), and often they provide supplementary material; but in general getting a good recast is a lot of work.

In recent years there has been a lot of effort by both experimentalists and theorists to make recasting easier and to meet in the middle. There are lots of workshops and common meetings, and lots of initiatives such as making raw data open access. On the theory side, while many people write their own bespoke codes to study some model with some search, there are now several frameworks for grouping together recasts of many different analyses. The ones with which I am most familiar are CheckMATE, Rivet, ColliderBit and MadAnalysis. These can all be used to check a given model against a (non-identical) list of analyses, but they work in somewhat different ways -- at their core, though, they all involve simulating signals and detectors. There is therefore a little friendly competition and very useful cross-checking. Then there are other useful tools, in particular SModelS, which instead cleverly compares whatever model you give it to a list of simplified-model results published by the LHC experiments, and uses those to come up with a much faster limit on the model (this of course loses generality and can fall prey to the assumptions and whims of the particular simplified-model results that are available, but it is nonetheless very useful).

So the reason for the post today is the paper I was just involved in. It is the proceedings of a workshop where a group of people got together to recast a bunch of the latest LHC searches in the MadAnalysis framework. I didn't take part in the workshop, but I did recast a search which was useful for a paper last year (if you are interested, it involves the signal in the inset picture), so there are now 12 new reinterpretations of some of the latest LHC searches. This should be useful for anyone comparing their models to data. You can see by the number of authors involved how labour-intensive this is -- or check out the author list on last year's white paper. There are still many more searches from Run 2 of the LHC yet to be recast, so we have our work cut out for some time to come before there is any new data!

If you are interested in the latest developments, there will be a forum next month.

Sunday, 4 October 2020

Zoom on the Universe

Like everyone else in the field, I've been enjoying the new benefits and disadvantages of Zoom meetings since the lockdown in March. They've given access to talks from all around the world, and made it even easier to carry on working on my own stuff while ignoring someone else talking in the background (it's amazing how motivating being in a talk can be).

However, as part of the Fête de la Science, this week some eminent members of the particle and string groups at my lab are getting together to do an Ask Me Anything for anyone -- adults or children -- interested in asking about the deep questions of the universe. The sessions will be on Monday/Wednesday/Friday at 17h30 CET. The first two will be in French, and the last one in English. I'll be taking part in the one on Friday, mainly putting on my string theory cap. We had a rehearsal last week which was a lot of fun; I'm hoping the real thing will be even better.

Please circulate the link here and visit us to ask anything you ever wanted to know about string theory, cosmology, dark matter, physics Beyond the Standard Model, ... from researchers working at the cutting edge!