Stratégie
Study 135 - 05/2026

How do wars end? "Securitisation" and the problem of victory and defeat

Sir Hew Strachan | 1h31min read

It is much easier to start a war than to end one. That truism reflects the fact that war termination normally proves to be a protracted process, subject to twists and turns, and sometimes leading in the short term to renewed escalation and further violence. Nonetheless, ultimately all wars end – even if they can then restart years or decades later. These complexities may explain why the literature on war termination is comparatively under-developed compared with that on the causes of armed conflict. Today Europe faces three wars which it would like to see ended: that in Ukraine following the Russian invasion in February 2022, that in Gaza following the Hamas attack of 7 October 2023 and that in Iran following the attacks by the US and Israel in 2026. As neither Britain nor EU member states are themselves belligerents, they are not the final arbiters in any of these conflicts but, even if they were, it is not clear that they have the competence or understanding to end them with success. This study examines the subject’s difficulties the better to develop its modalities.

Introduction

It is much easier to start a war than to end one. That truism reflects the fact that war termination normally proves to be a protracted process, subject to twists and turns, and sometimes leading in the short term to renewed escalation and further violence. Nonetheless, ultimately all wars end – even if they can then restart years or decades later. These complexities may explain why the literature on war termination is comparatively under-developed compared with that on the causes of armed conflict. This study explores some of the subject’s difficulties.

In 2026 Europe faces three wars which it would like to see ended: that in Ukraine following the Russian invasion in February 2022, that in Gaza following the Hamas attack of 7 October 2023, and that in Iran following the attacks by the US and Israel. As neither Britain nor any EU member states are themselves belligerents, they do not have the power to terminate any of these conflicts but, even if they did, it is not clear that they have the competence or understanding to do so with success. Donald Trump claimed over 50 times in the run-up to his re-election as president that he could end the war in Ukraine within 24 hours. Most observers were not convinced, and so it has proved. The US’s efforts to broker a peace between March and May 2025 served only to reveal how irreconcilable were the ambitions of Russia and Ukraine. They could not even agree on modalities: Russia wanted to move straight to a peace settlement on its terms; Ukraine wanted a ceasefire first, and then negotiations.

Since the 9/11 attacks in 2001, the tendency to blur the distinction between war and peace has compounded the difficulty in passing from the first to the second. The growth of national security strategies and the development within political science of security studies have placed war on a continuum that talks of persistent threats from both states and non-state actors. The practice of national security and the theories surrounding security fail to separate the use of armed force from other forms of competition. The fact that war involves violence, that its currency is that of humans killing each other, puts it in a different category from other threats to humanity. ‘Securitisation’ conflates threats from hostile human agents with dangers – like climate change and its consequences – that are the product of very different phenomena.

If we wish to enable a transition from war to peace, we need not to conflate them but to have a clear sense of what separates them. Without that awareness, we shall not understand how to enable the passage from war to peace and to factor in the contingencies which affect the process.

Many of the difficulties in making peace arise directly from the fact that it is an activity which occurs while war is ongoing. It can therefore be associated not with the moral virtues of peace itself but with the more effective pursuit of armed conflict. Peace may be the end state to which peace-making points but the context in which the negotiations are conducted is that of war. Both sides can still be engaged in using force in order to gain a negotiating advantage. One can escalate the violence to bring the other back to the table. In a coalition war, an alliance may encourage one of the opposing powers to make a separate peace. However, in this case the immediate aim will not be peace itself but the division of the enemy in the pursuit of a more complete final victory. By the same token, an ally can threaten to negotiate a separate peace with the enemy. In both world wars, the fear that Russia would do so with Germany encouraged its allies – Britain and France in 1914 and the United States and Britain in 1943 – to take measures to bind Russia more closely to the alliance in order to forestall such an eventuality. In September 1914 the commitment of the three allies not to make a separate peace expanded their war aims, with Britain in March 1915 even accepting Russia’s right to take Istanbul and to control the straits in the event of an Entente victory. In January 1943, Franklin Delano Roosevelt and Winston Churchill embraced the policy of unconditional surrender at their conference in Casablanca in order to placate Stalin who was demanding that they open a second front in Europe that year. The pursuit of peace through victory required an intensification of each war’s scale and tempo, not their moderation.

Peace-making is therefore a process hedged about by conditionality and contingency. Nor are these factors solely concerns for the direct relations between opposing belligerents. In On War, Carl von Clausewitz described war as a trinity made up of passion, the play of probability and chance, and reason, and he associated each of these attributes with particular groups of domestic actors within the state or nation: passion with the people, the play of probability and chance with the armed forces, and reason with the government. These pairings are not fixed but the important point here, when it comes to the move from war to peace, is that each group of internal actors may feel ready to do so at different times and for different reasons. In Clausewitz’s own day states and crowned heads proved ready to treat with Napoleon after major defeats on the battlefield but from 1807 onwards their peoples, as they felt the hard hand of French occupation, did not accept the verdict of the battle but opted to fight on.

Battle no longer gives a clear result in war, as it was deemed to do in the eighteenth century. Wars since then have been sustained or rekindled by popular mobilisation, and their outcomes have been increasingly delivered by the greater economic and social exhaustion of one side than the other. Battles, when they occur, have been waypoints on this road through exhaustion to victory, but they are only rarely end points. Measuring the indicators that point towards eventual success, in wars of so-called ‘attrition’, has helped direct military action towards an eventual goal, but it is one where other elements – principally economic and social mobilisation, but also the contributions of allies – have to be incorporated into national strategies.

Nonetheless, the comparative absence of decisive battles in wars since 1815 does not mean that victory itself has become a ‘myth’. From the tactical level upwards, armed forces still seek to achieve objectives in war that can then contribute to the achievement of overall war aims. The offensive has the power to create a unifying sense of direction to any military action, tactical or operational. It is these imperatives, the need to give purpose and direction to the conduct of war, which explain the continuing faith in battle and which see victory as the best path to peace. During the Second World War, the policy of ‘unconditional surrender’ reflected that ambition.

The close correlation between the ways in which a war is fought and the achievement of an acceptable outcome means that war itself shapes the final peace, regardless of the original aims of the belligerents in the war – including those of the eventual victors. It also means that, if both sides continue to see war as the only route to the fulfilment of their policies, then they will fight on. All wars, even those leading to unconditional surrender, end with negotiation. The defeated party therefore has to have reached the point where it accepts that the war will no longer give it what it had hoped for. It aspires to negotiate an outcome which will preserve more of what it wanted than by then seems possible to achieve by continuing the war. On the other side, the victors realise that they will get more of what they want through a settlement rather than through continued fighting. In these circumstances, the leverage of third-party actors – like that which the United States sought to exercise in 2025 – is more limited than the apparent moral virtues of peace might suggest. Peace in the midst of war is not some abstract good but still, like war itself, a means to an end.

Securitisation and its perils

On 24 February 2022, major war returned to Europe when Russia invaded Ukraine. The attack should not have come as a surprise. Eight years before Russia had annexed Crimea and since 2014 low-level fighting had persisted in Donbas despite the claim that the war had been ‘frozen’ by the Minsk accords. Brokered by France and Germany, they sought to de-escalate the conflict, appease Russia and prioritise peace. Their countries having enjoyed increasing stability since 1945, Angela Merkel and François Hollande were determined not to jeopardise that security and pinned their hopes for peace on an agreement that failed to satisfy either of the belligerents. It divided Ukrainians and left Russia merely marking time.(x) Such responses – risk-averse and prioritising stability over realism – were hard-wired in Brussels. ‘Europe has never been so prosperous, so secure nor so free.’ The opening statement of A secure Europe in a better world, the European security strategy adopted by the European Council in December 2003, was not untrue. But the paradox in what followed was that it addressed insecurity, not security. Within four short paragraphs, A secure Europe in a better world had moved on to assert that ‘over the last decade, no region of the world [including Europe] has been untouched by armed conflict’.(x) Europe was not alone in addressing a positive as a negative. National security strategies did the same thing. In 2008 Britain’s first national security strategy announced that Britain had rarely, if ever, been as secure as it had now become. It then proceeded to address manifold sources of global insecurity, as though all of them, from climate change to terrorism, were equally dangerous to the United Kingdom.(x)

The proliferation of security strategies, both as individual national policies and as multilateral objectives in a collective international order, was reflected academically in the growth of security studies. Like the former, the latter dealt with global threats on the same terms as local ones, and natural disasters as though they were comparable with military confrontations. The result of this conflation was not dissimilar from the reversal of security into insecurity: we had become confused as to what we were talking about by creating empty dichotomies. We allegedly lived in a world which was characterised by neither war nor peace (despite our increasing security), and when we did fight we acknowledged neither triumph nor defeat – because we said our aim was to achieve not victory but greater security. We craved something that was relative, security, but treated it as though it were an absolute and therefore perfectible. As a result, all we seemed to feed was its corollary, which was a greater awareness of our insecurity.(x)

The very fact that the 2003 European security strategy talked (as does international law) about ‘armed conflict’, rather than war, makes the same point. War is an absolute (even if there are gradations within it): we know it when we see it, and it does not admit of equivocation. However, we are uncertain about war’s status in part because states no longer declare war. In 1921, J. A. Hall, a lawyer looking back on the outbreak of the First World War, observed that, if ‘a State chooses to seek its ends by war, international law can have nothing to do with the origins or purpose of that war or the rights and wrongs of the parties. It has no power to punish the wrong-doer or even to inquire who is the wrong-doer.’(x) Seven years later the Kellogg-Briand pact endeavoured to address this legal deficit by prohibiting the use of war as an instrument of national policy. The pact failed to prevent war, not just in 1939, but almost from the moment of its signature. However, that did not deter those anxious to legislate against its use. In 1945 the United Nations Charter appropriated to the UN Security Council the right to use force to maintain or restore international peace and security. The only exception, contained in article 51 of the charter, is the sovereign state’s right to self-defence in the case of armed attack. This is the provision that has underpinned the legitimacy of Ukraine’s response in February 2022 just as surely as Russia’s action breached the UN Charter.

States today are therefore reluctant to talk about going to war for very obvious reasons: in most cases international law does not allow them to do so. International humanitarian law and the laws of armed conflict since 1945 have rested on a paradox: they have legislated for the conduct of activity which is itself illegal in most sets of circumstances. Putin insists Russia is engaged in a ‘special military operation’, not in a war, even if what is going on in Ukraine is de facto demonstrably a war. Both sides are taking prisoners of war and – even if they cannot agree on much else – are ready to exchange them, so in this respect at least manifesting their observance of the laws of war.

But the corollary of the ambiguity which arises from this divergence between declaratory principle and effective practice is that, just as we are no longer so certain what war is, so we are less sure what peace is. Peace too is out of fashion. The loosening of one definition has made us less sure about the other. Peace is treated as utopian and idealistic, not least because the concept of security obscures its distinction from war. The literature on war’s causation is more abundant than that on war’s termination; a similar imbalance marks the literature on war as opposed to that on peace. The former is much more abundant than the latter, and those who work on peace not unreasonably opine that those who work on war can only define peace in negative terms – as the absence of war.(x) For many people in many parts of the world peace is a more real experience than war, and that can be true for long periods of time. It was true for much of Europe between 1815 and 1914 and again between 1945 and 2022.

Security studies are in danger of missing a core reality: that in practice each of us, and especially those of us who have the privilege to live in the western world, has some grasp of what peace is. It is based on our experience of daily life, which reflects the fact that we do indeed live in conditions of extraordinary security, whether that is understood as individual human security or as collective national security. The societies of European states, as the European security strategy recognised, are societies at peace. They know that not least because until 1945 their collective experience was so different. In the first half of the twentieth century the frontiers of European states were in some cases redefined up to three times; so too were their political complexions; and two general European wars, which became world wars, inflicted a combined total of well over 70 million deaths – and in some global estimates much more than that. These experiences are still within living memory. By contrast, in 2012, Martin Kettle, reviewing the Cameron government’s efforts to introduce secret courts to the United Kingdom and to enhance the state’s powers of domestic surveillance, acknowledged that Britain was ‘indisputably not at war with anyone. We are not at war with any nation state, not at war with Islam, not at war with terrorism, and not at war with the people in our midst who are, be in no doubt about it, plotting to kill us’. And then he revealed the paradox: ‘But it sometimes looks that way.’(x)

The distinction between peace and war

This point, that we talk about security in such a way that we feed our insecurity, does not mean that there are not genuine ambiguities in the relationship between peace and war. Peaceful conditions can prevail within war, at certain times and in some places. But the most profound challenge to our understanding of the distinction between war and peace is Thomas Hobbes’s insight, that force is required to maintain the order that we call peace. He believed that life for humans in a state of nature was brutal and short. By granting the state the monopoly of violence, societies prepared themselves better for self-defence against external threats, while creating the domestic order that we call peace. The notion that society, whether domestic or international, is fundamentally anarchic means that the state’s possession of the monopoly of armed force is what creates order. Embedded within the understanding of domestic peace is the presumption that the state uses its monopoly of force wisely, and that sovereign authority is exercised by the state in the interests of the nation and community as a whole. Peace without justice is, on this reading, no peace. As John Stuart Mill put it, in response to the American Civil War, ‘War is an ugly thing but not the ugliest. The decayed and degraded state of moral and patriotic feeling which thinks nothing is worth a war is worse. A man who has nothing which he cares about more than his personal safety is a miserable creature and has no chance of being free, unless made and kept so by exertion of better men than himself.’(x)

Mill’s statement is not just an injunction to fight against domestic tyranny, a legitimation of civil war, but also a justification of international war. It anticipated Franklin Delano Roosevelt’s state of the union address which he delivered to Congress as president of the United States on 6 January 1941, in which he presented four freedoms as universal human values – freedom of speech, freedom from want, freedom from fear and freedom of worship. When the United States entered the Second World War at the end of that year, these values were translated into American war aims. They became the reasons why war was necessary. In 1943, the United States journalist and intellectual, Walter Lippmann, declared that ‘the survival of the nation in its independence and security is a greater end than peace’. As he explained: ‘If the logic of peace as the supreme national ideal leads to absurdity, then it must be a grave error to think and to say that peace is the supreme end’.(x)

Lippmann came from a democratic state. But his statements echo the response of Carl von Clausewitz in February 1812 when he rejected the French demand that Prussia contribute troops for Napoleon’s invasion of Russia. Prussia was ruled by an absolute monarch, even if its king, Friedrich Wilhelm III, was not noted for his decisive leadership. For Prussia, and particularly for Clausewitz and his immediate associates, France, not Russia, was Prussia’s enemy. So Prussia had to fight France rather than submit to Napoleon’s demand. As Clausewitz wrote, ‘I believe and confess that a people can value nothing more highly than the dignity and liberty of its existence; that it must defend these to the last drop of its blood; that there is no higher duty to fulfil, no higher law to obey’. And he concluded: ‘I believe that a people courageously struggling for liberty is invincible; that even the destruction of liberty after a bloody and honorable struggle assures the people’s rebirth. It is the seed of life, which one day will bring forth a new, securely rooted tree.’(x)

The moral challenge set by these words is reflected in their explicit use of Christian analogies. The cadence and the vocabulary are those of the Apostles’ creed. The imperative to fight, even in vain, may be greater than the imperative not to fight. As James Turner Johnson has repeatedly but controversially argued, the Christian just war tradition can too easily be seen as a norm predisposed to favour not fighting. The just war tradition, he claims, also creates an expectation that, if the cause is just, as Clausewitz believed Prussia’s to be in 1812 or Lippmann believed that of the United States to be in 1943, then we are obliged to fight.(x)

Both Clausewitz’s and Lippmann’s calls to arms confirm that our understanding of what peace is may require us to use war to sustain it. But they also constitute a moral challenge because they contain arguments that themselves can be used for ends which are unequivocally immoral. Clausewitz’s ringing affirmation in 1812 was quoted with approbation by Adolf Hitler in Mein Kampf: indeed it was the only quotation from Clausewitz used by Hitler, who managed to resist the usual temptation to cite selectively from On War, but who in 1923 saw how this statement captured Germany’s mood after the defeat of 1918. Just over twenty years later, on 18 March 1945, Hitler said to Albert Speer, ‘If the war is to be lost, the nation will also perish. This fate is inevitable. There is no need to consider the basis even of the most primitive existence any longer. On the contrary, it is better to destroy even that, and to destroy it ourselves.’(x) In contemporary western societies, the dilemmas of the relationship between war and peace are rarely, if ever, put in terms such as these, or even those favoured by Lippmann in 1943. But they have not been absent from the world of the 21st century: the rhetoric used by Hitler finds its comparator in the calls to arms of Islamic jihadi websites.(x)

Those who live in western societies luxuriate in societies that have grown so persistently peaceful that they take that peace for granted, but that does not mean that there are not citizens of those same societies who are serving, killing and possibly dying in wars. Most citizens of most NATO countries rightly feel themselves to be at peace; most service personnel of the NATO armed forces deployed to Afghanistan felt themselves – equally rightly – to be at war. The member states of NATO are under pressure to soften their conceptual awareness of the distinction between war and peace precisely because of the gulf between the daily experiences of those who live in western societies and of those who defend them. That softening of the difference, and our uncertainty as to which is the prevailing condition of our times, war or peace, are the means to enable those of us who live relatively secure lives to appreciate the efforts of those who do not. They help us to understand that our security rests on a concept of international order and a readiness to fight so as to defend and maintain it.

War is different from other threats

Recent military doctrine no longer treats war as distinct and different, the polar opposite of peace. French military doctrine used to make a clear distinction between peace and war. The steps from one to another were the product of a crisis – a moment when the differences and tensions might go in either direction, towards war or back to peace.(x) In 2021, the chief of the defence staff’s Vision stratégique replaced this sequence, peace-crisis-war, which culminates in dramatic change, with a continuum – competition, dispute and confrontation – in which peace has no place. Ironically, nor does war – or not explicitly so. ‘These three notions … are tightly linked. Two stakeholders might thus find themselves competing in one domain and disputing in another.’(x)

A similar evolution can be seen in British doctrine. The third edition of British Defence Doctrine was published in August 2008, while the United Kingdom’s armed forces were fighting simultaneously in Iraq and Afghanistan. It said that ‘peace and war cannot always be distinguished absolutely’. Its reasons were logical enough: ‘The resolution of complex contemporary crises may involve a hybrid of conventional warfighting and irregular activity combined, as well as concurrent stabilization activity, all in the same theatre’. As a result, ‘Boundaries between them may be blurred.’(x) By the time that the fourth edition was in preparation, in January 2010 (in itself an indication that the pressure of events was forcing the pace of doctrine development), a new draft went further: ‘Neither peace nor war… are absolute, nor are they necessarily opposites of each other (being but different means of achieving the same end: that of policy). Instead they represent a continuum or notional spectrum.’(x)

Both the 2008 and what became the 2011 editions began not with definitions of defence, despite that being their declared subject matter, or even of war, but with definitions of security. They sub-divided security into human security, United Kingdom security and collective security. They therefore embodied the point that security had replaced both defence and strategy as the key word to shape the understanding of war. In the 2008 doctrine the actual application of armed force was reduced to quasi-abstractions. ‘Force may be used in an instrumental (or direct) way to influence an opponent’s capacity to act, or in an expressive (or indirect) way to influence his behaviour.’(x) War itself was subsumed in a range of threats, some of them to people as individuals, some to nations, and some to the world as a whole. Security had become a concept which subordinated war. It was lumped not just with other violent products of human agency, like crime or terrorism, but with very different phenomena which do not involve direct assaults by humans on each other, and some of which may not even be the product of human design but of naturally occurring events. War was being put on a par with flood or drought, tsunami or earthquake.

Of course, the conduct of a war can precipitate natural disaster. Armies have long been vectors for disease and their movements have spread typhus, influenza and possibly AIDS. The destruction caused by war can forestall the efforts of farmers and precipitate famine. But disease and hunger exist without war, and earthquakes and tsunamis will occur regardless of whether we do or do not go to war. In the early hours of 7 April 2012 over one hundred Pakistani soldiers were killed while guarding the Kashmir border, the focus of a long-running dispute with India. However, their deaths were the result not of Indian action but of a major avalanche hitting their battalion headquarters. The political and strategic impact of the first cause would have been immense; that of the second, the reality, was zero. What the study of security should demand of us is not a capacity to conflate threats but a readiness to understand causation, because it is here that the links could lead to war. Climate change, which is at the centre of the security debate, highlights the point, not only because the degree of human responsibility for it is contested, but also because it is unclear how threats to our supplies of food, fuel and water may affect the appetite of states or non-state actors for war.

If we in Europe put all the security challenges on a spectrum that runs from death in our beds from old age, through death in a car accident or an aeroplane crash, to the least likely – death in war – we are in danger of so conflating the dangers that confront us that we may fail to distinguish between those that we can manage or even prevent, and those for which the best we can hope is that we can mitigate their consequences. Nor has ‘securitisation’ stopped there – that is, with events that cause direct physical harm to humans. It has highlighted the insecurities caused by all manner of human behaviours – resulting in psychological, social and ontological harms. Difference in terms of outcomes has become defined as disadvantage. War may play its part in these but it is not central to any of them. We are just as much in danger of ‘securitising’ every problem as our immediate predecessors were of ‘bellicising’ international relations through nuclear deterrence, or our nineteenth-century forebears were of ‘militarising’ their societies through the introduction of conscription.

The tendency of ‘securitisation’ to lump threats, rather than to disaggregate them, has migrated from academic discourse to public policy. Joseph Nye’s concept of ‘soft power’ is an example.(x) Nye, a former Rhodes scholar at Oxford, moved from Harvard to public service, as chairman of the National Intelligence Council in 1993-4, and as an Assistant Secretary of Defense in the Clinton administration. A liberal, he held a genuinely cosmopolitan position, resting on a recognition of the interdependence of states and of the need for transnational perspectives. ‘Soft power’, an idea which he first advanced in 1990 but really developed and promoted after the 9/11 attacks in 2001, is proposed as an alternative to the use of ‘hard’ or military power. It embraces the cultural, economic and educational activities of a state or community, and sees them as agents of influence rather than as goods in their own right. Nye has been very influential. The defence community began to treat soft and hard power as two sides of the same coin, united by a common objective, which is the exercise of national power. In this reading, military power can be either hard or soft: hard if used in fighting but soft if an agent of influence or even of emulation. Over the three decades since the end of the Cold War the armed forces of the United States have exercised both forms, through wars of intervention on the one hand and through setting trends in military thought and culture on the other. As a result, the distinction inherent between hard power, whose ultimate sanction is war, and soft power, which uses means whose ends in themselves are inherently benign and even pacific, is lost in the hands of defence doctrine and national security strategy.

Soft power co-opts peaceful activities in situations and for purposes in which states might possibly have been ready to use or at least to threaten force. Those were the grounds on which his critics attacked Nye: that he underestimated the power of war to change things, while exaggerating the influence of cultural contact to do so. Watching American television does not necessarily prevent those who dislike the United States from attacking it. European distrust of the United States as a result of its determination to seek war with Iraq or Iran has not been dissipated by the European readiness to accept American popular culture or to have close personal friendships with many citizens of the United States. In 1914, Europe shared much in terms of education, academic research and the arts; international convergence was as much in evidence as national chauvinism. But in August those products of peace were all marshalled for patriotic purposes to support war. The distinction that mattered was not the means, which then – as in the soft power/hard power debate today – remained in essence the same regardless of the purposes they served, but the ends. War and peace are very different. Moreover, and crucially, if we cannot distinguish between them, we will not know how to end war.

The tendency to conflate the two carries three particular consequences. The first consequence is a failure to recognise that war is different from other dangers to global security. War’s essential characteristic, as Clausewitz put it at the very beginning of On War, is that it is a clash of wills. The danger comes from the enemy and his determination to counter our intentions. War is a reciprocal relationship, and it is that reciprocity which gives war its own inherent dynamic. War’s deliberate use of armed force makes it different from the challenges and threats posed by the natural world or even by other aspects of human interaction, such as economic competition. The fact that the currency of war is violence done by one human being to another means that the very fact of its employment fundamentally changes the relationships between human beings.

The role of contingency in war: long wars and short wars

The second consequence of conflation, that of lumping all security threats together, creates an artificial continuity, which rules out the play of policy and, above all, of contingency. In The Shield of Achilles, published in 2002, Philip Bobbitt saw the wars of the twentieth century as one long war, running from 1914 to 1990, and then used this historical analogy in his next book, Terror and consent (2008), to understand the long-term challenges posed by terrorism to liberalism, or to the ‘market state’ as he called it.(x) The ‘long war’ became the successor title to the ‘global war on terror’ at least two years before Barack Obama, George W. Bush’s successor as president, formally declared in 2009 that the global war on terror was over. But for historians long wars are only evident in retrospect. As Bobbitt himself pointed out, the wars between England and France which spanned the years 1337 to 1453 only became known as the Hundred Years War in the nineteenth century. Similar points could be made about many other long wars: the Thirty Years War, the Nine Years War and the Seven Years War each aggregated a number of separate or separable conflicts, marked by armistices and sometimes fought in different theatres between different belligerents.

This point is particularly true of Bobbitt’s ‘long war’ of the twentieth century. The First World War ended in 1918, although there were ‘wars after the war’ which ran on until the finalisation of the last peace settlement, that with the newly formed Turkish republic in 1923. The peacemakers who met at Versailles in 1919, and particularly President Woodrow Wilson of the United States (another academic who had migrated to public policy), possessed a truly ambitious vision for a new world order, which they hoped would ensure the early resolution of international conflict. Their ambitions outstripped their capacity to deliver, but the outbreak of the Second World War was not the inevitable product of the peace treaties that ended the First World War, whatever some Germans might have argued at the time and whatever some Americans, anxious to attack Wilson, have now come to believe. The League of Nations, formed in 1919 to resolve international disputes before they ended in war, was holed below the waterline by three of the victors in the First World War before Germany used military force to overthrow the terms of the Versailles treaty. The United States Senate refused to ratify the treaty in 1919 and again in 1920; Japan invaded Manchuria in 1931-32 and left the League in 1933; and Italy invaded Ethiopia in 1935.(x)

Most importantly, we forget that many states had little or limited experience of war between 1914 and 1945. The United States only fought between 1917 and 1918 and again between 1941 and 1945; Switzerland did not fight at all. And whom they fought changed: Russia was an ally of Britain in 1914-17 and of Britain and the United States in 1941-45, but became their adversary between 1948 and 1989, as it is again in 2026. The notion of the long twentieth-century war depends on a reading back from the stand-off between the United States and the Soviet Union in the Cold War. It requires us to ignore the ideological distinction between fascism and communism. And, finally, it also overlooks the rather important point that the Cold War was not a hot war as the two world wars were.

Thirdly, failing to distinguish between war and peace, and so lumping wars together and lumping war with other security threats, does not just downplay contingency. Precisely because these approaches do that, they end up with little to say about causation, or at least the short-term causes of wars. In their focus on the longue durée, they become almost Marxist in seeing wars as the product of inevitable clashes. They have little to say about wars that are short, or about wars that are fought between states or groupings that look socially and politically similar. Critically they do not explain why wars that seem inevitable do not always happen. Nor do they address the issues which confront the practitioners of strategy, who deal with policy here and now, as opposed to the theorists of strategy, who are anxious to put events into a context set by the past and orientated towards the future. The questions of causation come at the intersection between the two, between present and future, between reality and theory, which is precisely why the relationship between strategy in practice and strategy in theory is not an abstract matter but one of practical and vital importance. When, where and how will competition over scarce water supplies, or the melting of the polar ice cap, or the lack of fossil fuels produce war? Or will these issues actually not lead to war but be resolved by international or bilateral arbitration? Will the realisation that they could lead to war enable diplomacy to triumph over militarisation?

The passage from war to peace

The causes of wars have kept historians of international relations busy since the development of history as an academic discipline over a century ago. But the diplomatic historians of the past, many of them titans of the historical profession, tended to fall silent the moment the bullets started to fly, and only resumed their research once the fighting had stopped. Their legacy is still with us. We know a great deal about the passage from peace to war, not least because the issue is one of concern not only to historians but also to theorists of international relations: the latter are being prudential, as they need to understand war’s causation in order to prevent it. We tend to pay less attention to the reverse process: the passage from war to peace.(x)

International relations terminology helpfully distinguishes between what it calls conflict termination and conflict resolution.(x) In other words, it recognises that ending a war is not the same as removing the causes of the war or the hostility that the conduct of war may have engendered. A war needs to be fought through to the point where a strategic decision is evident; where one side knows that militarily it is not going to win. At that point cost-benefit calculations suggest that the defeated side should enter negotiations with the other side, the potential victor. Like war itself, this phase between war and peace remains nested in a bilateral relationship. The side that is winning has to accept the potential surrender of the side that is losing, and to offer terms that make surrender a palatable option, so much so that the defeated side will accept them rather than fight on. These terms may be minimal: no more than a guarantee that the victor will spare the lives of those who surrender. They may be much more generous, as were those offered in 1866 by Otto von Bismarck to Austria after its defeat by Prussia.

The negotiations are likely to be political, because, while battle is an exchange between armies, peace negotiations – at least those involving states – are an exchange between governments. If a non-state actor is a party to the war, for example in a war of national independence, then the aim will be to create a transitional government, so as to enable the move from war to peace. The compulsion to conclude war by inter-governmental negotiation can be evident even in the most extreme circumstances, and even in those where one side has demanded ‘unconditional surrender’. In 1945 the Japanese government was given the reassurance that the emperor would not be removed, so that some element of continuity would be preserved.(x) In Germany, the allies negotiated with Karl Dönitz, who, as Hitler’s appointed successor, possessed a form of political legitimacy, however shallow it might have been.

Clausewitz’s ‘trinity’ and conflict termination

The presumption within this state-driven model of war termination is that the government is the agency that surrenders. Is that right? Two other parties have an interest in the outcome of a war, at least within states, and they may behave differently from their government. The first is the armed forces. Surrender often begins with capitulation on the battlefield, with soldiers raising the white flag to show their readiness to talk or with a ship striking its colours.(x) The notion of decisive victory relied on capitulation or defeat in the field leading to a political outcome: the British surrender at Yorktown in 1781 or Napoleon’s defeat at Waterloo in 1815. But there have been massive surrenders which led not to a political result but to a renewed commitment to fight: the Soviet Union fought on in 1941 despite the loss of 3 million men between June and December.(x) Equally, governments can decide to negotiate when their armed forces dispute that they have been defeated, as Germany’s did in 1918. They can fight on in exile, as the states conquered by Germany in 1939-41 did, or as the European members of NATO thought they might have to do in the face of a Soviet invasion in the 1950s and 1960s.

The second party with a stake in the government’s decision is the nation as a whole. The people may reject the wish of the government to negotiate. Popular resistance to Napoleonic rule in Spain or Italy flowed from the differences between princely rulers and their peoples. The former were prepared to negotiate in the hope that they would save their thrones and status; the latter were not consulted, remote from central decision-making, and more likely to feel the hard hand of French occupation.(x) In 1870 the Prussian army defeated the French army of Napoleon III in the decisive battle of Sedan. Although Napoleon accepted the verdict of the battlefield by abdicating, the war did not end. The French fought on, using forms of national resistance not unlike those that had been used against them in the Peninsular War sixty years previously.(x) By contrast, in 1917 – as war-weary peoples felt the strains of hunger, economic deprivation and personal loss – they sensed the power of revolution as a way out of war, while their governments resolved to renew the fight.

In democratic states the possibility of a division between government and people is diminished, but modern democracies can exaggerate the potential of that division when fighting more authoritarian regimes. Greater literacy and improved communications, through media like radio, film, television and the internet, have enhanced governmental control of popular responses. In 1944 the allies’ hope that the German people would rise against Hitler was not fulfilled. In 2003, when the United States invaded Iraq, it did not bank on the surrender of Saddam Hussein, but did anticipate – wrongly as it turned out – a rapturous welcome from his subjects. The calculation of the United States-led coalition was that the Iraqi people, not their leader, would take the decision to negotiate.

These relationships in the transfer from war to peace, between governments, armed forces and people, have therefore changed over time, particularly as democratisation has meant that the people are understood to be distinct political actors in their own right, and not seen just as subjects of an absolute ruler. In the ancient and medieval worlds, and even up to the eighteenth century, the pursuit of booty and plunder in war could unite the state and the individual in the motivation for going to war. By the same token these objectives also help explain how and why wars ended. At the individual level, those who were defeated were either killed or increasingly – as their asset value was appreciated – were passed into slavery or redeemed for ransom. As Genghis Khan is said to have put it, both pithily and graphically, but also probably apocryphally, ‘The great happiness is to vanquish your enemies, to chase them before you, to rob them of their wealth, to see those dear to them bathed in tears, and to clasp to your bosom their wives and daughters.’(x)

War aims and war’s end

Possession, not negotiation, was the basis for ending war. Prolonged resistance simply produced obliteration. In 428 BC, during the Peloponnesian War, the city of Mytilene on the island of Lesbos in the Aegean Sea decided to revolt against Athenian rule and side with Sparta. The revolt was suppressed in 427 and Athens initially decided to kill all the men, and to sell the women and children into slavery. The next day, further discussion led it to revoke its decision, and to execute one thousand leading citizens instead. The trireme despatched with the second verdict managed to reach the city just in time to prevent the implementation of the first decision. This clemency reflected a belief that even in a democracy not all were equally responsible for the acts that were carried out on their behalf. The same argument was not applied just over a decade later, in 416 BC, when another Aegean island, Melos, which had asserted its neutrality in the war and so refused to join the Athens-led Delian League, fell after a prolonged siege. Athens killed all the adult males, and sent the women and children into slavery. Such summary treatment was far from unusual in the ancient world. Notoriously, Rome ended its long-running struggle with Carthage for control of the Mediterranean in 146 BC by destroying the city, annexing all its territory, and killing or enslaving the entire population. The political rallying cry of the republic, Carthago delenda est, that Carthage should be destroyed, was thus fulfilled.

War for Athens and Rome was the means to empire, to wealth, territory, revenue and population, and once one side had achieved those objectives the war ended. However, if war was no longer fought for material gain, but for reasons of religion or ideology, its outcomes could seem less clear cut and more open-ended. Victory might leave you master of the battlefield and therefore of its surrounding territory, its peoples and their assets, but it did not ensure conformity of faith or thought. The link between tactical delivery and political outcomes became less direct, and this disjunction remained true even after 1648, when at least ostensibly Europe turned its back on wars of religion.

In the seventeenth and eighteenth centuries, control of territory and resources remained the principal political driver in war, from Louis XIV’s expansion of France to its ‘natural frontiers’ on the Rhine and the Alps, to the competition between France and Britain for control of North America and India. The wars of the Spanish, Austrian and Bavarian successions all made the same point: dynastic rights conferred territorial control. One of the great figures of political philosophy in the Enlightenment, Charles, baron de Montesquieu (1689-1755), disagreed with Hobbes’s view that man in a state of nature was inherently given to war, a phenomenon which Montesquieu explicitly associated instead with society and the state. Montesquieu believed that ‘as soon as man enters into a state of society, he loses the sense of his weakness; equality ceases, and then commences the state of war’. But he could not escape the logic of war of his own day. In order to manage war, he had to define it, and his definition was still one determined by the need to possess: ‘the object of war is victory; that of victory is conquest; and that of conquest preservation’.(x)

The verdict of battle

Tactically, eighteenth-century warfare was played out less often on the open battlefield and more through the medium of siege warfare. If the point of war was to master territory, then Louis XIV’s engineer, Sébastien Le Prestre de Vauban (1633-1707), was the supreme practitioner of the day. He made his reputation in the conduct of siege warfare, but his lasting legacy, still visible in the European landscape, was his work to consolidate what France had gained through a system of fortifications designed to form robust French frontiers. He is credited with upgrading the defences of 300 cities and towns, and with creating 37 entirely new forts. So important did siege warfare become in the eighteenth century that many of those who thought about and practised warfare counselled against battle in the open field. Maurice de Saxe, Marshal of France and commander of the French army in the Netherlands in the War of Austrian Succession in 1745-48, was not overly impressed by the work of Vauban, but still wrote ‘I do not favour pitched battles, especially at the beginning of a war and am convinced that a skilful general could make war all his life without being forced into one’.(x) The Duke of Marlborough may have acquired fame and fortune in a campaign of manoeuvre culminating in a decisive battle at Blenheim in 1704, but the War of Spanish Succession continued for another nine years, and in that time Marlborough conducted more sieges than he fought battles. Frederick the Great’s battlefield masterpiece, his defeat of the Austrians at Leuthen in 1757, was fought at the outset of the Seven Years War. It saved Prussia but it did not end the war, which carried on until 1763.

In the eighteenth century the verdict of battle could be decisive because both sides, the victors and the defeated, accepted the outcome as conferring legal rights. In the early Christian just war tradition, success in battle reflected divine approval. In the transfer to a more secular order, mediated through the writings of Hugo Grotius in 1625, Thomas Hobbes in 1651 and Emer de Vattel in 1758, faith and natural law were replaced by customary international law. But those laws depended for their application on shared perceptions of the limits which the users of war might be expected to observe.(x) That changed with the French Revolution and the use of successive battles on different fronts. Carl von Clausewitz revealed the duality of thinking that France’s military primacy forced on those who opposed Napoleon. Clausewitz placed a chapter on the decisive battle – Die Hauptschlacht – Ihre Entscheidung – at the centre of book IV of On War, that on combat. But he then wrote in book VI, devoted to defence, that ‘a state must never assume that its country’s fate, its whole existence, hangs on the outcome of a single battle’.(x) Clausewitz was looking to guerrillas, insurgents and even terrorists to sustain national resistance beyond the battlefield and its clash of formed armies.

The perception of battle and what it could achieve was transformed in 1815: Waterloo seemed to validate the argument that victory, a tactical outcome, could be decisive at the strategic level. The French Revolutionary and Napoleonic Wars were bundled together by nineteenth-century military analysts in very different ways from the way in which we now see the Hundred Years War, the Thirty Years War, the Nine Years War or the Seven Years War. Rather than being seen as the ‘Twenty-Three Years War’, a single war of exhaustion, they were treated as a series of independent campaigns, each culminating in a decisive battle, from Valmy in 1792 to Marengo in 1800, from Austerlitz in 1805 to Jena in 1806. Waterloo itself was the conclusion not to a long war but to a campaign that had lasted only a hundred days. The armies of Europe were persuaded that what they did on the battlefield determined the outcome of a war, not what the people did or suffered, and not economic exhaustion or popular uprising or revolutionary guerrilla war.(x) Chronologically, Waterloo’s status was unimpeachable: it consolidated a peace settlement which ensured comparative European order for all but a hundred years. But that peace rested on a longer and more widely experienced memory than the events of one day in June 1815: on the fear of revolution which war had promoted, and on the depredations and suffering which the French army had brought in its wake. There was a paradox here: if twenty-three years of conflict were a case against war, the perception of Waterloo was an argument for the use of battle.

Waterloo formed the climax to Edward Creasy’s best-seller, Fifteen decisive battles of the world, published in 1851 but required reading in innumerable editions for the next fifty years. The book began with Marathon, and showed the enduring influence of battles on what Creasy called ‘our own social and political condition’. The idea of the decisive battle, that tactical success can shape strategic and political outcomes, cast a long shadow. In France, it was brought up to date in 1913 by Jean Colin, whose Les grandes batailles d’histoire was published in French and English editions in 1915. It survived the First World War. In 1923, F. E. Whitton emulated Creasy in his choice of title in a successor volume called Fifteen decisive battles of modern times, although he stopped at the Marne in 1914 and so did not embrace the challenges which the subsequent fighting generated for the idea of the decisive battle. The Marne was at one level indubitably a decisive battle. It saved France and committed Germany to a protracted war on the western front for which it was not economically equipped. The Marne would therefore ultimately play a major part in Germany’s eventual defeat, but it prolonged the war rather than terminated it.(x)

Three other veterans of the First World War were not as perplexed as Whitton about how to use battle to explain the outcomes of twentieth-century warfare. Basil Liddell Hart, J. F. C. Fuller and Cyril Falls, all distinguished military writers, contributed to the genre of battle history in ways which straddled both world wars. In 1929 Liddell Hart wrote The decisive wars of history: a study in strategy. Although the book acknowledged the place of grand strategy, which was ‘to decide whether strategy should make its contribution by achieving a military decision or otherwise’, that was not the focus of Liddell Hart’s attention. His focus was on what he called ‘pure strategy’, or the art of the general. He rejected the idea that the general’s sole aim was battle, but he still left either the decisive battle or its threat at the heart of strategy: the general’s ‘true aim is not so much to seek battle as to seek a strategic situation so advantageous that if it does not of itself produce the decision, its continuation by a battle is guaranteed to do so’. He gave a list of decisive battles in history, in ‘almost all’ of which ‘the victor had his opponent at a psychological disadvantage before the clash took place’.(x) Waterloo was not among them but he still treated it as a decisive battle. He argued that well conducted operations, characterised by what he called the ‘indirect approach’, would culminate in a ‘decisive’ outcome, and that would normally be on the battlefield. Waterloo was decisive for Liddell Hart, because Napoleon adopted what Liddell Hart called the direct approach and the Prussian commander, Blücher, the indirect one – or the ‘line of least expectation’.(x)

The decisive wars of history proved to be Liddell Hart’s most enduring work, regularly revised, rebranded as Strategy: the indirect approach, and subject to a sustained attempt to keep it relevant to the nuclear age up to 1964. A decade before, Fuller had re-launched his career as a military historian with the first of what were to become three volumes called The decisive battles of the western world and their influence upon history; it too has had a successful publishing history. Finally, Cyril Falls, formerly professor of the history of war at Oxford and an official historian of the First World War, edited a glossy volume, Great military battles, also in 1964. Fuller concluded with Leyte Gulf and Falls with the Ardennes, both battles which contributed to the defeats of Japan and Germany, even if neither country surrendered as a direct consequence of either of them.

Battle no longer delivers a result: the changing relationship between tactics and strategy

Even more revealing of the lessons derived from the wars of 1792-1815 were nineteenth-century readings of Clausewitz’s On War. The central preoccupation of On War is less the relationship between war and policy and more that between battle and war, or between tactics and strategy. Clausewitz’s definition of strategy as the use of the battle for the purposes of the war understood the task of strategy as being to convert tactical into strategic success. The side which held the advantage at the end of the battle had to exploit it by pursuing and annihilating the enemy. In other words, Clausewitz did not claim that battle was decisive in itself. He had fought with the Russians against the French at Borodino in 1812 and had seen how what today are called ‘symmetrical forces’ could negate each other. But many nineteenth-century military theorists inverted Clausewitz’s argument. They said that the aim of manoeuvre was not to exploit the battle after it had been fought and so achieve a decision; instead the aim was to bring the enemy to battle because the battle was itself decisive.

Colonial warfare encouraged this sort of thinking. The native populations in countries outside Europe possessed strategic advantages: they knew the terrain and the climate, and they were considered likely to be more resistant to local diseases and infections. European armies offset their strategic inferiority with the tactical advantages of discipline, order and firepower. The key message of manuals like Charles Callwell’s Small wars (first published in 1896) was the need for the colonial power to seek battle as soon as possible, in other words to use tactical advantages to overcome strategic disadvantages.

Because the enemy would probably lack the form and discipline of a European army, he would prefer guerrilla war to seeking battle. So the colonial army had to achieve victory not by destroying such order as the enemy forces possessed, as had happened in the pursuit of an enemy army after a Napoleonic battle, but by killing them. Clausewitz had used the word Vernichtung, or annihilation, to describe what happened to an army in the pursuit after a battle. He made clear that what he meant to convey was the process by which an army became a rabble and so ceased to exist. Today, however, Vernichtung carries connotations of genocide. Isabel Hull called her study of the German campaign in south-west Africa (present-day Namibia) in 1904 Absolute destruction, because there – in a non-European context – the German army did embark on the wholesale eradication of the Herero people. Hull sees a link between what happened in Namibia and what the German army went on to do to the civilian populations in Belgium and northern France at the outset of the First World War.(x) The connection is tendentious: other European armies that waged colonial war did not commit atrocities in Europe, and one army which had no colonial experience, Austria-Hungary’s, brought terror to Serbia in 1914. More important in their own thinking was a distinction all European armies made in connection with colonial warfare: they differentiated between what they called a civilised enemy, that is, an enemy who respected the laws of war and took prisoners, and an uncivilised enemy, who did not. After he retired as chief of the Prussian general staff at the end of 1905, Alfred von Schlieffen wrote a series of studies of Hannibal’s victory over the Romans at Cannae in 216 BC. This was another decisive battle which did not actually decide the war, but it did embody Schlieffen’s idea of a battle of annihilation. Schlieffen’s point was that such a battle of annihilation was still possible, although clearly it would be fought under different conditions from Cannae. One of those differences (and he was of course referring to European warfare against ‘civilised’ opponents) was that ‘capitulations have taken the place of slaughters’.(x)

By 1914 many twentieth-century generals had come to believe, in a way that many eighteenth-century generals had not, that battle was inherently decisive and that strategy existed to make a tactical decision possible, not to exploit the tactical events which constituted fighting. That being the case, the key was to get the enemy to commit himself to battle. Both sides set out to do that in the First World War, but no decision resulted. Instead tactics trumped strategy. When the war ended in 1918, it did so with a whimper more than a bang, without a decisive military outcome in the sense that had been understood before the war. The German army was still in France, intact and claiming it was undefeated; it was certainly not annihilated in either Carl von Clausewitz’s or Isabel Hull’s sense. It was therefore hard for traditional strategy to see the connections between the culminating events on the different fronts and the peace settlements negotiated at Versailles. This was particularly so in Germany’s case, but it applied even in theatres where it could be argued that there had been decisive battles: Vittorio Veneto in Italy, Mosul in Mesopotamia and Megiddo in Palestine.

The pursuit of unconditional surrender

Therefore, by the 1930s the link between the battlefield and the peace settlement had been broken. This was not just the product of the German army’s argument that it had not been defeated on the battlefield. It was also fed more widely, beyond Germany, by the sense that the victory of 1918 had not led to peace. This frustration with the failed ambitions of the 1919 peacemakers underpinned the adoption of unconditional surrender in the Second World War. On 24 January 1943, at the conclusion of the Casablanca conference, Franklin Delano Roosevelt surprised Winston Churchill by publicly announcing that the policy of the allies would be to seek the unconditional surrender of the Axis powers. He then clarified his remarks by citing the Confederate surrender at the conclusion of the American Civil War as precedent. But he was not being completely frank. Unconditional surrender may have been a formula which held together an alliance possessed of incompatible post-war objectives, but its drivers were as much retrospective as prospective. As Paul Kecskemeti put it in 1958, ‘Instead of planning to settle the problems germane to World War II, they [the allies] resolved to end it by doing everything that would have been needed to prevent it from breaking out’.(x)

So unconditional surrender was a formula that endeavoured to roll together the military process of capitulation on the battlefield and the political decision to end the war. It was in some ways a reversion to the patterns of ancient and medieval warfare. The challenge which it confronted was the lack of a political actor who was legitimate in the eyes of the allies and with whom they could therefore deal, even assuming the Germans were ready to negotiate. In the ancient world this political impasse would have been resolved by the destruction of the enemy’s fighting capacity, by Vernichtung, or annihilation, by the triumph of the military means of war over the political. However, the two world wars do not provide much evidence that mass surrender was, in its own right and independently of other variables, sufficient to cause state collapse.(x)

By the end of the eighteenth century, individual belligerents were beginning to acquire rights. A prisoner of war was no longer the possession of his captor but had his own status. He was to be fed and accommodated during the war, and returned to his home at the war’s end. This principle was acknowledged by the leaders of the French Revolution, and was embodied in the Lieber code in the American Civil War, the Brussels rules of 1874 and the Hague conventions of 1899 and 1907. It potentially changed the relationship between the commander and those whom he led, giving the latter the opportunity, if they could, to renegotiate the terms on which they fought. In the First World War, French soldiers, told to fight to the last man and the last round, did not do so, and their generals accepted that fact. What was required was a decent showing, not literal obedience to orders.(x) Other forms of collusion between commanders and those they commanded relaxed the code of courage and obedience. After the French army mutinied in 1917, their commander in chief, Philippe Pétain, imposed punishments which were less severe than those meted out earlier in the war, and sought to conciliate more than dragoon his men.(x) His British colleague, Douglas Haig, similarly realised that he could not expect a citizen army made up of conscripts to adhere to the disciplinary norms of the pre-war regular army: as the war progressed, the British army, like the French, carried into effect proportionately fewer death sentences for desertion in the face of the enemy. 
In the German army, precisely because the law demanded death for desertion, without (as in the British case) scope for leniency, military courts preferred to try soldiers on the lesser charge of absence without leave.(x) In the Second World War, some commanders went one stage further, actually leading their armies into captivity in ways that had not been seen since the ritual surrenders of cities in the age of Vauban. At Stalingrad Friedrich Paulus did so in 1943, much to Hitler’s fury, and so did the British general, Arthur Percival, at Singapore in 1942.

Collective surrender avoided the useless waste of life and was rationalised on this basis. It was deemed prudential and could even be honourable, at least on humanitarian, if not national, terms. Those who did not surrender, like the Japanese in the Second World War, were no longer deemed courageous, as were the 300 Spartans who had died holding the pass at Thermopylae against the Persians in 480 BC, but fanatical and even sub-human. They were denigrated by their enemies, not lauded.(x) But mass surrenders, including those at Stalingrad and Singapore, were not in themselves decisive for the outcome of the war. The same point can be made about the First World War. What was remarkable was how an army could be defeated in a ‘battle of annihilation’ but its nation could still fight on: in Russia’s case after the defeat of its 1st and 2nd armies at Tannenberg in 1914 or after its territorial and manpower losses in the ‘great retreat’ in 1915; in Italy’s after the Austro-German breakthrough at Caporetto in October 1917; or even in Britain’s, given the very high proportion of prisoners of war as opposed to killed and wounded among the casualties suffered by the 5th Army when the Germans attacked on the Somme on 21 March 1918. Manpower loss on the battlefield sufficiently great to be described as ‘annihilating’ and even ‘decisive’ in purely military terms did not translate into political effect. The collapse of the British Expeditionary Force in France in 1940 or the high numbers of those who deserted from the British 8th Army in the summer of 1942 did not prevent Britain ending up on the winning side in the Second World War. By contrast the Japanese rarely surrendered but still lost.

When the medieval economic historian, Marc Bloch, addressed the defeat of France in 1940 – yet again a decisive German victory which did not in fact decide the outcome of the war as a whole – he called it a ‘strange defeat’. He equated individual surrender with the surrender of the nation. Bloch was a Frenchman, French patriot and French nationalist, a man for whom the nation in arms was a living reality (he had served France in two world wars). For him the surrender followed not from the annihilation of the army but from a collective decision by the French people. Herein was the other approach which could resolve the political impasse created by unconditional surrender. The people, rather than the government, could be treated as the legitimate party to surrender. The idea of collective responsibility rested as surely on the democratising legacy of the French Revolution as did the legal rights of prisoners of war, captured performing their military duty as citizens. But it also assumed the capacity of strategic effects in war either to separate the people from their government so that they could act independently, or to ensure that they could bring overwhelming pressure to bear on their government, even to the point of revolution.

Economic exhaustion and the road to peace

After 1918 many Germans, supported by several British commentators, including Liddell Hart, argued that the allied blockade of Germany had precipitated the German revolution of November 1918, and that the revolution was what had caused the defeat of Germany. Both were and remain hotly debated propositions, but they suggested that the German people had been persuaded by the effects of the war to topple an autocratic regime in order to replace it with a government that was peace-minded and even potentially democratic. Herein was a different notion of the relationship between war and revolution from that entertained after 1815: then the French revolution was seen to have promoted and intensified war, rather than stopped it in its tracks.

Much allied propaganda in the First World War rested on the presumption that the Kaiser, helmeted and moustached, was the embodiment of German militarism, whereas the German people were potentially liberal democrats. Given that the socialists had formed the largest party after the Reichstag elections in 1912, even if they had not won an overall majority, the presumption was not without foundation. In the Second World War, the strategic bombing offensive, like the blockade in the First World War, used a similar mixture of stick and fairly inedible carrot. The German people were to be persuaded through the bombing of their homes and cities to become angry not with the allies, who had inflicted terrible suffering on them, but with the man indirectly responsible for the allied actions, Hitler himself. In both world wars inadequate allied understanding of the dynamics of the relationship between government and people rested on preconceptions that were as culturally conditioned and as driven by uncertain intelligence as those that shaped allied behaviour and expectations in relation to Iraq in the wars of both 1990-91 and 2003-11, and to Afghanistan in the war waged there between 2001 and 2021. The bonds between Hitler, the German army and the German people proved much more resilient than any of the calculations resting on the presumption either of a generals’ coup or of a popular uprising directed against the regime.

As the Cold War became entrenched, the perception grew that the pursuit of unconditional victory had not in fact produced a worthwhile peace in 1945. It converged with the implications for the conduct of war of the adoption of nuclear weapons. By the mid-1950s the possibility of an all-out nuclear exchange suggested that a future war would produce no victory that was worth the name for either side, let alone leave in place governments able to negotiate the terms of a peace settlement. The move from strategic effect to political resolution would itself be impossible because there would be no governments to oversee the move from military to political effect.

The pursuit of right effect

As a result the just war tradition became less focused on just cause and more on right effect. Could you claim, if and when you embarked on a war, that you would leave the world in a better place when the war was over? Most strategic thinkers of the nuclear age answered no, but that response in itself energised those who argued for a concept of victory within a nuclear war. If it were the case that much of nuclear deterrence theory rested on foundations that were immoral, because it could not imagine a right effect emerging from the actual use of nuclear weapons, then the solution was not to abandon nuclear deterrence but to find a form of nuclear war which produced a positive outcome. Moreover, deterrence had to rest on a credible threat to go to war to be truly effective. Finding a way for nuclear weapons to deliver right effect would therefore also make them more useable and so would enhance deterrence.(x)

It would be easy to draw a straight line from the 1950s to the present day. Doubts about the validity of victory within war have outlasted the end of the Cold War and the declining salience of nuclear weapons, and have been projected onto the wars in Iraq and Afghanistan. Doctrine writers in the wake of the post-9/11 wars have disputed the relevance of victory as a concept. They have skirted the issue by not addressing it directly or by defusing its impact with less evocative or specific descriptors. By not addressing victory, they could also refuse to acknowledge its corollary – defeat. Victory in the post-9/11 wars became defined in terms of creating conditions sufficiently secure to enable the intervening powers to get out of the countries they had invaded.

The phrase, an ‘exit strategy’, was coined to meet the pressures, domestic as well as economic, to end the protracted conflicts in Iraq and Afghanistan. It confused means and ends. Surely an exit could not be the political object of the war? If it cannot be, an exit strategy creates uncertainty as to what the real objective is. On 24 October 2011 Oliver Letwin, minister of state in the British Cabinet Office, was asked about the United Kingdom’s strategy in Afghanistan, when giving evidence to the United Kingdom’s Joint Parliamentary Committee on the National Security Strategy. He responded that it was a matter of balancing the need to help the Afghans themselves to stabilise their country and ‘on the other side, the extent to which our presence might become part of the problem rather than part of the solution’. So the resulting strategy was expressed not in terms of British interests in Afghanistan or Pakistan, but in terms of a timetable: ‘We balanced those out and came to the view that we had to set a date that was not very far out but, on the other hand, was far enough so that it could be done in an orderly and proper fashion’.(x) An answer which had begun with a reasonable strategic objective, the stabilisation of Afghanistan, had moved to a different objective, the most sensible timing of British withdrawal. An exit may have terminated the conflict for the intervening powers but it did not amount to conflict resolution for the inhabitants of Afghanistan.

Bringing victory back in

The concept of victory is too important within war for those who wage it to be comfortable with outcomes that ignore it. Even during the Cold War, NATO armies revived the concept in the 1980s, despite the presumption that conventional operations were only a step on the ladder to an eventual nuclear exchange. In non-nuclear war, and at the tactical level, gaining or holding ground, or winning a fire-fight when your own survival and the lives of your immediate comrades are at stake, remained as applicable in Helmand as it did on the beaches of Normandy. Here victory has meaning. In the United States army in the aftermath of the Vietnam War, and in north Germany as the British and German armies considered how to meet a massive Soviet invasion by operational counter-strokes, the tactical idea of victory had two purposes. First, it reinvigorated the morale of NATO armies and air forces, particularly those of the United States, and secondly it gave them a positive form of war into which to sink their intellectual teeth. Its results became known as ‘airland battle’ and in due course grew into a whole body of operational thought around the idea of manoeuvre warfare. Those concepts, designed to create counter-offensive options in northern Europe in the 1980s, were applied with stunning success in the first Gulf war in 1990-91, and were then refined and perfected in the 1990s particularly through the incorporation of advanced technologies. Successive constructs for the operational level of war followed, the ‘revolution in military affairs’, ‘network-centric warfare’, ‘transformation’ and ‘effects-based operations’. In May 2003, at what he concluded was the end of the second Gulf war, President George W. Bush was able to put the cap on this current of military thought by declaring ‘mission accomplished’ from the flight deck of USS Abraham Lincoln.

In 2003, as in 2001 in Afghanistan, a short campaign had been crowned with a decisive victory. Iraq seemed set to join the German wars of unification of 1866 and 1870, the war in the Falklands in 1982 or even the Kosovo campaign of 1999 as an example of how war could fulfil the ends of policy, at least for the victors. In practice victory understood in a military sense had trumped a proper appreciation of its strategic outcomes. The same confusion also dogged Bush’s approach to the ‘global war on terror’. By October 2005, when it was clear that the mission in Iraq had not in fact been accomplished, and when the mistake of conflating operational brilliance with political effect was recognised by most observers, it was time to recalibrate the strategic narrative. But Bush still applied to the war on terror vocabulary appropriate to the Second World War. ‘Against such an enemy’, he said, invoking in the minds of his audience ideas like total war and unconditional surrender, ‘there is only one effective response: we will never back down, never give in, and never accept anything less than complete victory’.(x)

Such a conflation of military success and political surrender, a legacy of the notion of decisive battle, is not simply delusional in relation to the war on terror or in counter-insurgency campaigns; it is rare in all war. If anybody had read the thoughts of Chairman Mao to George W. Bush in 2005, the president would not have been listening. Rapid victory, Mao Tse-tung wrote in On protracted war in 1938, ‘exists only in one’s mind and not in objective reality… it is a mere illusion, a false theory’.(x) Mao’s war in China, a conflation of a national war against the Japanese and a civil war against the Kuomintang, lasted twenty-three years. For every short campaign culminating in decisive battle, there have been many more where the apparently decisive battle, as in May 2003, was followed by further campaigns and even ultimate defeat.

Measuring success: the pitfalls of quantification

In protracted wars, exhaustion rather than annihilation becomes the means to strategic effect. The denouement can still be decisive, as it was for Mao or as it was for the allies in 1945. But if the exhaustion is mutual, the war more logically ends in negotiation, as each side moderates its terms to meet the demands of the enemy. More importantly, during the war the military means, because they become the indicators of relative advantage, of ‘progress’ in the war, can overtake the political objectives. Data collection, measurements designed to assess effectiveness, creates its own targets. On 6 January 1915, as Britain began its second calendar year of fighting in the First World War, Sir Charles Callwell, the Director of Military Operations, reckoned that the German army’s losses, because it was fighting on two fronts, were twice those of the allies, and concluded that it would not be able to sustain its current strength for more than six months. Callwell had produced this calculation on the orders of the secretary of state for war, Lord Kitchener, who then told the British war council that Germany would be exhausted by early 1917.(x) Callwell’s memorandum was just the beginning: such thinking increasingly dominated allied counsels. The best known of the British army’s trench newspapers, The Wipers Times, satirised what came to be called the strategy of attrition. Assuming a total fighting population in Germany of 12 million, that 8 million of them were dead or being killed, and that 1 million were in the navy, it concluded that only 3 million had to be accounted for. ‘We can write off 2,500,000 as temperamentally unsuitable for fighting, owing to obesity and other ailments engendered by a gross mode of living. This leaves us 500,000 as the full strength. Of these 497,240 are known to be suffering from incurable diseases, of the remaining 600, 584 are Generals and Staff. Thus we find that there are 16 men on the Western Front. This number I maintain is not enough to give them even a fair chance of resisting four more big pushes, and hence the collapse of the Western Campaign.’(x)

In not dissimilar ways, the metrics of the body count came to dominate the United States Army’s presentation of its own success in Vietnam. The sort of wishful thinking lampooned by The Wipers Times created political pressures on the Military Assistance Command in Vietnam to inflate figures. Collecting accurate information on all aspects of the enemy’s performance in a complex counter-insurgency campaign was a massive exercise; by 1967 the US Army in Vietnam was producing reports which totalled 14,000 pounds in weight each day. There was more information than could be properly assimilated, so much so that the numbers of the enemy that tactical commanders reported their units as having killed became by default the most obvious objective measurement. But this was not the heart of the problem. The real issue was that the numbers of enemy dead, even if accurate and even if rising, were not necessarily the best index of progress in the war, particularly in areas of effectiveness that were not so susceptible to quantification.(x)

The land campaigns of the First World War and of the war in Vietnam were not exceptional in these respects in twentieth-century military history. Quantification is an obvious by-product of the application to war both of economic mobilisation and of the discipline of economics. In the war at sea in 1914-18, the efforts to measure the progress of economic warfare through the blockade of the Central Powers created similar pressure for statistical measurements. The easily quantifiable effect was the decline in food imports available for the civil population. However, that bald figure in itself revealed nothing about the response of German agriculture to meet the deficit, the effectiveness of food distribution networks within Germany, or the pressure to alter diets with their possible nutritional consequences. The blockade of food imports did not stop the German soldier being first in the queue for food, and so economic warfare had little appreciable effect on the operations of the army in the field.(x) ‘What, indeed, could be more frivolous’, the British official historian of the blockade, A. C. Bell, wrote, ‘than that the British and French fleets; the whole diplomatic service of the allies; the bureaucracy of Whitehall; and the most talented men that could be recruited from our universities, law schools and business houses, should combine, for four whole years, to execute an operation of war against hospital patients; to increase the sufferings of phthisic, asthmatical and bronchitic persons; and to raise the number of women who miscarry in childbed?’(x)

Bell’s answer to his own question was to deflect the argument from the quantifiable to the unquantifiable: the effect of the blockade was to be measured by its effects on the morale of the German nation. Advocates of air power have regularly found themselves falling back on comparable arguments. During the Second World War, the target list of German cities drawn up by Bomber Command fostered the illusion that science was being applied to strategy, and that a point would be reached when German industrial production would collapse. In fact it peaked in the summer of 1944, over two years after the air offensive had begun. The air power argument became a counter-factual one: that production would have been even higher without the bombing.(x) In the aftermath of the Iraqi invasion of Kuwait in 1990, air power theorists argued that the first Gulf war could be won through aerial attack. The collapse of the Iraqi army within a hundred hours of the onset of ground operations seemed to confirm the argument that ‘Operation Desert Storm’ had delivered its objectives. It had not, in that it had not decapitated the Iraqi government, and it had not destroyed a field army, both claims advanced at various stages of its planning and execution, but debunked by the official Gulf War Air Power Survey in 1993.(x) The theorists of air power responded by following Bell’s precedent: they abandoned the use of metrics and assessed the effect of Desert Storm in terms of collapsing morale.

In the war in Afghanistan the figures for poppy production and the numbers of Taliban leaders killed in attacks by special forces or by unmanned aerial vehicles were used in similar ways. They created targets whose destruction provided indications of success in their own right, and so the means to wage war became ends in themselves. As they did so, they shaped the political objectives and so distorted the expected outcome of the war. Once again, war – and particularly protracted war – came to master policy.

War can change the objectives of policy

Whoever imagines that Clausewitz’s norm, that war is the continuation of policy by other means, reflects a consistent reality has not read much military history. If states or any other organisations go to war in order to fulfil specific policy objectives, then they need to be much more aware than they ever appear to be that war typically changes policy. Even if wars are decided in short order, very rarely are they continuations of the policies which brought them into being, particularly if the terms on which they end are used as criteria. What is at stake here is not what Clausewitz wrote, but our selective and ill-applied understanding of it. Clausewitz was not primarily addressing the causes of war. His focus was on the inter-active and dynamic relationship between war and policy once war had begun. His anxiety, and one which those who aspire to employ war in the pursuit of policy should share, was to direct war so that it would be of use. The fact that war is bloody and destructive is a very good reason for being cautious about beginning one. However, once we are engaged in war, its costs are precisely what behove us to focus on its utility – to apply it for the pursuit of policy, and so to bend its inherent chaos towards rational objectives.

The great crisis in Clausewitz’s life, or at least in his life as a military theorist, came in 1827, when he realised that the wars on which his theory was founded – his own military experience in a war of national survival for Prussia, and what he called the ‘absolute’ wars of Napoleon – did not represent the only form of conflict that had occurred in the history of mankind. Other wars, including those in which his father had fought under Frederick the Great’s command, showed different characteristics. So his theory of war needed to encapsulate more limited and contained forms of war. His resolution of this crisis was a more mature theory that identified two forms of war and gave them a common identity through their shared relationship to policy. Clausewitz was one of the first writers to formulate a theory of limited war, and he inspired Hans Delbrück in Germany and, even more directly, Julian Corbett in Britain before the First World War, and then Robert Osgood in the United States after the Korean War. Osgood’s arguments underpinned the United States’ initial commitment to Vietnam and, as a consequence, defeat there discredited them.

As a result, neither the United States nor its allies have really appreciated that they have been fighting limited wars ever since. The wars in Iraq and Afghanistan were geographically confined, and also constrained in terms of resource commitments and levels of national mobilisation. Less clear is whether they were limited in their political objectives. They were presented as part of a global war on terrorism and universalised as struggles for democracy, liberal values and human rights. For many in the west the war in Afghanistan was a war for the promotion of beliefs we hold to be central to our humanity, those of religious freedom, legal rights and political liberalism. These beliefs echoed Roosevelt’s four freedoms of January 1941 and so elevated the necessity of the war and seemed to make it a continuation of policy. But it is just that elevation, manifested in the focus on the just war tradition, and specifically on ius ad bellum (or the international law relating to war’s initiation), which in the global war on terror radicalised the west’s reasons for fighting, made it hard to negotiate or compromise, or even to find an ‘exit strategy’, and which perversely ensured that war continued to direct policy. Because their objectives were far greater than the means that the allied governments were prepared to devote to them, their conduct of both wars became incoherent. Policy was not able to provide a unifying and directing influence, and so the wars themselves became even more influential in shaping policy than they might naturally have been.

Balancing means and ends: the war in Ukraine

The responses of both the United States and NATO to Russia’s full-scale invasion of Ukraine in February 2022 and after reflected the legacy – and the unlearnt lessons – of Afghanistan and Iraq. In this case the Western states had no difficulty in embracing the justice of Ukraine’s cause and so the rhetoric of the war – one for liberty, democracy and national sovereignty – was more fully representative of the reality than in the ‘global war on terror’. But even if there was greater clarity over the war’s ends, there was still a failure to supply the means that the fulfilment of such ambitious aims required. In many respects, western states revealed their lack of a true understanding of major war, not least because their national security strategies had hitherto ducked the issue by failing to distinguish between ‘hard’ security, especially war, and other forms of threat. Fettered by the lack of stockpiles and wedded to the free-market orthodoxy that munitions production would simply respond to demand, they could not supply artillery shells or short- and medium-range missiles in the numbers and with the speed required. That was not all. Despite their rhetoric of support, Ukraine’s western allies, and especially the US, failed to appreciate the need for victory in a war of national survival. Fearful of the war’s escalation, they allowed Russia to establish escalation dominance by threatening the use of nuclear weapons. The result under the Biden administration was that the US and its allies did enough to keep Ukraine in the fight but not enough for it to win.

Before his re-election to the presidency, Donald Trump promised to end the war in 24 hours. Rightly, nobody took that undertaking seriously, and by May 2025 the hopes that the United States could bring the war to a conclusion were fading. Trump failed for two reasons. First, he assumed that the advantages of peace were as obvious to the belligerents as they were to him. As he saw it, a settlement would halt the deaths of ‘millions’ of people as well as the destruction of their homes and national infrastructure. He offered economic carrots to both sides to make the case for a deal. But he had begun at the wrong end of the equation, with peace and not with war. He failed to see that negotiations to end the war which began with peace neglected the links between the war itself and the war’s termination. Zelensky responded to Trump by agreeing to a thirty-day cease fire. He did so largely to keep Trump in play but also because he saw that, if the cease fire were extended and the talks ran on, they might help Ukraine recover what it had lost in a way which three-plus years of combat had so far failed to do. Putin, recognising that possession was likely to be nine-tenths of the law, also sought to manipulate Trump, but by calling for an immediate peace settlement, which – given the situation on the ground – was likely to leave Russia master of Crimea and of the four oblasts which it claimed to have annexed. In other words, neither side had renounced the aims which had propelled them into the conflict in the first place and, as a result, war still possessed utility: the war was indeed the continuation of their policies by other means.

The challenge for the western world is whether it can develop an understanding of major war (and indeed of all wars) that, first of all, recognises that wars are likely to be impervious to the third-party intervention of others when both sides have objectives that they are only likely to gain by fighting. All wars may eventually end but they do so only when both sides see that they have more to gain by stopping them than by their continuation. In December 1916 Woodrow Wilson’s call to the belligerents in the First World War to tell him their war aims produced not a first step to peace, despite the massive losses that year at Verdun and on the Somme, but a realisation that there was no common ground between the two sides. Secondly the US and its allies need to move beyond a vocabulary still locked in the legacies of the two world wars and the Cold War and their over-blown triumphalism, and instead adopt one grounded in an understanding of war’s current realities. The naivety of Trump’s peace-making efforts reflected his profound ignorance of the conditions underpinning the war’s continuation in Ukraine and Russia.

Our need is not to renounce war as a political tool, but to think through with much greater rigour and pragmatism the possible consequences of fighting, recognising that war is not a unilateral use of force, but a reciprocal exchange which possesses its own dynamic, and to whose evolution we have to pay constant and sustained critical attention. We must, in Edward Luttwak’s formulation in 1999, ‘give war a chance’ in order to avoid the dangers of a premature peace which will not endure.(x) This is not just a political necessity; it is also a moral obligation, for only thus can we recognise the implications of our actions for the achievement of right effect. If we use war to try to make the world a better place, then that is the doctrine which should govern our deliberations and the decisions which flow from them. And, if that is the aim, we need to fight it in ways that are compatible with that outcome. It may demand more of us than we are prepared for, but if we don’t prepare we shall continue to fail.