Can Complementary Learning Methods Teach AI the Laws of War?

Davit Khachatryan is an international lawyer and lecturer focusing on the intersection of armed conflict, emerging technologies, and international law. 

The Judge Advocate watched the feed from the tactical operations center alongside her commander. The screens, each attended by systems monitors, showed more than a dozen developments unfolding at once. An artificial intelligence (AI)-led drone swarm was closing on the front line through the city, coordinating its movements faster than any human pilot could direct, an artificial flock of mechanical starlings like a cloud on the radar. A civilian aid convoy had stalled on the northern approach. An enemy artillery battery was repositioning south behind a residential block. In the nearby valley, friendly units were maneuvering under fire. All these pieces were in motion, lives and vehicles and weapons. The soldiers’ behavior would be determined by interactions between their commander and the AI.

The challenge here is not as simple as claiming that AI cannot comply with the principle of distinction under international humanitarian law (IHL), also known as the law of armed conflict. The fog of war complicates decision-making for both humans and machines, but does so in profoundly different ways.

For a human commander, the chaos of the battlefield is filtered through layers of training, doctrine, experience, and instinct. Even when overwhelmed, a person can weigh incomplete facts against their mental map of the situation, recall comparable past events, and fall back on moral and legal anchors. This does not mean humans do not make mistakes; they do, often with serious consequences. But even in error, their reasoning is shaped by caution, hopefully empathy, and the capacity to interpret ambiguous information in light of their own individual understandings of humanitarian obligations.

AI processes that same chaos as streams of probabilities. Every sensor reading, target profile, and movement pattern is reduced to statistical likelihoods: how probable it is, based on the training data, that this object is hostile, how urgent its engagement appears, how likely a given action is to produce the “correct” result as defined in training. In its logic, the most probable option is the correct one. Under extreme operational pressure, the AI focuses on the statistically most plausible, while rare possibilities drop toward statistical zero, far less likely to be considered than they would be by a human.

This difference in reasoning is why training environments must be built to include not just the probable, but the improbable: those outlandish, once-in-a-century battlefield events that stretch judgment to its limits. For AI, these scenarios must be constructed, repeated, and reinforced until they occupy a permanent place in the machine’s operational vocabulary.

A credible arms control position would be to prohibit or pause the development of certain autonomous capabilities. Nevertheless, this article proceeds conditionally because much of the stack is already fielded (AI-enabled intelligence, surveillance, and reconnaissance triage, targeting support, and navigation), and because dual-use diffusion (commercial drones, perception models, planning tools) makes a clean prohibition hard to sustain. If states continue down this path with minimal international instruments, the question becomes how to embed legal restraint so that rare, high-stakes judgments are not optimized away. What follows sets minimum safeguards if development and deployment proceed.

How AI Learns

If AI’s logic is built on statistical reasoning, the way it acquires those statistics determines the boundaries of its thinking. This is true for AI in general, whether in a medical diagnostic tool, a financial trading algorithm, or a targeting system on a battlefield. The patterns an AI recognizes, the probabilities it assigns, and the priorities it sets are all downstream from its training.

In the military domain, an AI’s training determines how it operates in relation to the law of armed conflict and the unit’s rules of engagement: what it accepts as positive identification (distinction), how it trades anticipated military advantage against collateral damage estimation (proportionality), when feasible precautions require warning, delay, or abort, and when uncertainty triggers a mandatory hand-off to a human. The two dominant machine learning paradigms, imitation learning and reinforcement learning, can both produce highly capable systems. Yet without deliberate safeguards, neither inherently preserves the kinds of rare, high-stakes judgments that human decision-makers sometimes make under the fog of war: moments when they choose to forgo an operational advantage to prevent civilian harm. Statistically, those moments are anomalies.

Imitation Learning: The Apprentice Approach

Imitation learning (IL) is essentially training by demonstration. The AI is shown large datasets of human decision-making, each paired with the information available at the time. In a military targeting context, this might include annotated sensor feeds, mission logs, and after-action reports: strike approved, strike aborted, target reclassified, mission postponed.

The model’s task is to learn the mapping between conditions and human actions. If most commanders in the dataset abort strikes when civilian vehicles enter the target zone, and enough examples of that behavior appear in the dataset, the model will learn to mirror that restraint.

IL captures the statistical distribution of decisions in the training data. Rare but important choices, such as holding fire in a high-pressure engagement to comply with proportionality, will be underrepresented unless deliberately oversampled. Left uncorrected, the AI may treat those lawful restraint decisions as statistical noise, unlikely to be repeated in practice. Additionally, because much of the data on which machine learning models train reflects past military experience, many AI models will echo the implicit biases of the past human decisions from which they learn.
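To make the idea concrete, here is a minimal sketch, in Python, of what deliberate oversampling might look like in an imitation learning pipeline. The dataset, label names, and boost factor are hypothetical illustrations, not drawn from any fielded system.

```python
import random
from collections import Counter

# Hypothetical labeled examples: (observation features, human decision).
# In a real pipeline the observations would be sensor snapshots; here they
# are placeholders, dominated by routine "strike_approved" entries.
dataset = [
    ({"track": "vehicle", "confidence": 0.91}, "strike_approved"),
    ({"track": "vehicle", "confidence": 0.48}, "strike_aborted"),   # rare restraint case
    ({"track": "convoy",  "confidence": 0.88}, "strike_approved"),
    # ... thousands more in practice
]

def restraint_weighted_sample(data, n, boost=10.0):
    """Resample the dataset so rare restraint decisions carry real statistical weight.

    Each example is weighted by the inverse frequency of its decision label,
    and restraint labels get an extra multiplier ('boost') so the imitation
    learner cannot treat them as noise. The boost value is a design choice
    to be validated in testing, not a legal constant.
    """
    counts = Counter(label for _, label in data)
    restraint_labels = {"strike_aborted", "hold_fire", "switch_to_nonlethal"}
    weights = []
    for _, label in data:
        w = 1.0 / counts[label]                 # inverse-frequency weighting
        if label in restraint_labels:
            w *= boost                          # deliberate overrepresentation
        weights.append(w)
    return random.choices(data, weights=weights, k=n)

training_batch = restraint_weighted_sample(dataset, n=1000)
```

The point of the sketch is not the specific numbers but the act of documentation: the boost factor is a curation decision about how much statistical weight lawful restraint should carry, and it should be recorded and justified like any other design choice.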

A Quadrupedal-Unmanned Ground Vehicle (Q-UGV) goes over rehearsals at Red Sands IEC in the CENTCOM AOR Sept. 18, 2024. (U.S. Army photo by Spc. Dean John Kd De Dios)

Reinforcement Learning: The Trial-and-Error Arena

Reinforcement learning (RL) works differently. Instead of copying human decisions, the AI is placed in a simulated environment where it can take actions, receive rewards for desirable outcomes, and incur penalties for undesirable ones. Over thousands or millions of iterations, the AI learns policies: decision rules that maximize its cumulative reward. At scale, this training is highly compute- and energy-intensive. That matters because it concentrates capability in a few well-resourced programs, slows iteration and red teaming, and creates pressure to trim the very rare-event scenarios that protect civilians and support compliance, while adding a nontrivial environmental footprint. Programs should, therefore, set minimum scenario coverage and doubt-protocol testing requirements that are not waivable for budgetary reasons.

In a military context, this means an RL agent might repeatedly play through simulated scenarios: neutralizing threats, protecting friendly forces, and avoiding civilian harm. The way those objectives are weighted in the reward function is decisive. If mission success is rewarded heavily and civilian harm only lightly penalized, the AI will statistically favor the course of action that maximizes mission success, even if that means accepting higher risks to civilians.
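A toy sketch of such a reward function shows how decisive those weights are. The outcome fields and numeric values below are invented for illustration; any real weighting would have to emerge from legal review and extensive testing, not from a code comment.

```python
def reward(outcome, w_mission=1.0, w_civilian=25.0, w_friendly=5.0):
    """Hypothetical reward function for a simulated engagement episode.

    The ratio between w_mission and w_civilian is the decisive design choice:
    if civilian harm is penalized only lightly relative to mission success,
    a reward-maximizing policy will statistically accept more risk to civilians.
    The numbers here are placeholders, not calibrated values.
    """
    r = 0.0
    r += w_mission * outcome["threats_neutralized"]
    r -= w_civilian * outcome["civilian_casualties"]   # heavy penalty: restraint pays
    r -= w_friendly * outcome["friendly_losses"]
    if outcome["held_fire_under_doubt"]:
        r += 2.0   # explicit positive value for lawful restraint in doubtful cases
    return r

# With these weights, an aggressive action that neutralizes a threat but harms
# one civilian scores worse than holding fire under doubt.
aggressive = {"threats_neutralized": 1, "civilian_casualties": 1,
              "friendly_losses": 0, "held_fire_under_doubt": False}
restrained = {"threats_neutralized": 0, "civilian_casualties": 0,
              "friendly_losses": 0, "held_fire_under_doubt": True}
assert reward(restrained) > reward(aggressive)
```

Flip the weights, so that mission success dominates, and the same policy-learning machinery will converge on the opposite behavior.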

RL’s strength is adaptability. Its weakness is that low-probability events, rare civilian patterns, and unusual threat behaviors will remain statistically insignificant unless the simulation environment repeatedly forces the AI to confront them. 

IL can pass down the shape of human judgment; RL can provide flexibility in novel situations. But each carries a statistical bias against rare, high-impact decisions, exactly the kinds of decisions that can determine the legality and morality of military action. Only by deliberately elevating those rare cases in training, through curated datasets and stress-test simulations, can either method hope to produce systems that behave lawfully and predictably under the fog of war. On the evidence of deployments to date, achieving this level of end-to-end compliance remains out of reach.

Soldiers don the Integrated Visual Augmentation System Capability Set 3 hardware while mounted in a Stryker at Joint Base Lewis-McChord, WA.

The Simulation Imperative

Actual combat records, produced by soldiers in logs, after-action reports, or targeting databases, are skewed toward the typical patterns of engagement that happen often enough to warrant recording after the fact. Unprecedented and chaotic situations will strain both the law and the system’s decision-making, yet they appear so rarely in historical data that, in statistical terms, they are almost invisible. An AI, left to its statistical logic, will not prepare for what it has seldom seen.

This is why simulation is the decisive safeguard1. In imitation learning, rare but critical decisions must be deliberately overrepresented in the dataset, so they carry enough statistical weight to influence the model’s behavior. In reinforcement learning, the simulated environment must be constructed so that “once-in-a-century” scenarios occur often, sometimes in clusters, forcing the system to learn how to navigate them. A humanitarian convoy crossing paths with an enemy armored column, loss of communications during a time-sensitive strike, sensor spoofing that turns friend into apparent foe: these cannot be treated as peripheral edge cases. They must be made routine in training.
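One way to picture this is a scenario sampler that deliberately boosts the training frequency of rare events far above their historical base rates. The scenario names and rates below are hypothetical placeholders, chosen only to illustrate the gap between how often something happens and how often the system should rehearse it.

```python
import random

# Hypothetical scenario catalogue: how often each situation appears in
# historical records versus how often the simulation deliberately presents it.
SCENARIOS = {
    "routine_engagement":        {"historical_rate": 0.90, "training_rate": 0.40},
    "aid_convoy_crosses_column": {"historical_rate": 0.01, "training_rate": 0.20},
    "comms_loss_during_strike":  {"historical_rate": 0.02, "training_rate": 0.20},
    "friend_foe_spoofing":       {"historical_rate": 0.01, "training_rate": 0.20},
}

def next_training_scenario():
    """Draw the next simulated scenario using the boosted training rates,
    so 'once-in-a-century' events occur routinely, sometimes in clusters."""
    names = list(SCENARIOS)
    rates = [SCENARIOS[n]["training_rate"] for n in names]
    return random.choices(names, weights=rates, k=1)[0]

# Over a long run, roughly 60% of training episodes are rare, high-stakes cases,
# even though they are almost invisible in the historical data.
episodes = [next_training_scenario() for _ in range(10_000)]
```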

The more frequently the AI encounters these manufactured crises in simulation, the more space they occupy in its decision-making horizon. If and when similar scenarios arise in operations, the system’s response should not be improvised.

The Lieber Code in the Age of AI

The concept that, in cases of doubt, the commander should err on the side of humanity is not new. It was codified in 1863, when Francis Lieber drafted the Instructions for the Government of Armies of the United States in the Field, better known as the Lieber Code. 

This imperative has repeatedly been encoded in international humanitarian law. In the Additional Protocols to the Geneva Conventions2, the obligation to take “all feasible precautions” and to cancel or suspend an attack if it becomes apparent that it would cause excessive civilian harm relative to the anticipated military advantage operationalizes the humane minimum in treaty law. Critically, however, many key decision-making states have not ratified all the precepts articulated in the Additional Protocols. Customary IHL Rule 15 similarly requires constant care to spare civilians and civilian objects, and Rule 19 codifies the requirement to cancel or suspend attacks when doubt or changing circumstances create excessive risk.

Faced with ambiguous intelligence or conflicting imperatives, human commanders can recall a doctrinal anchor and make a choice that privileges restraint over risk. Even when they err, that error is shaped by a human blend of caution and interpretation of context.

For AI, the same scenario unfolds differently. Without explicit design, there is no natural “humane fallback” in its logic. In the face of uncertainty, an unmodified reinforcement learning policy will still pursue the statistically most rewarding action, and an imitation learning model will default to the most common decision in its dataset. 

This is where simulation and legal doctrine intersect. Embedding the humane minimum into AI means that in every training run, whether through curated historical cases or artificially generated edge scenarios, the option that aligns with humane treatment under uncertainty must be given decisive weight. In imitation learning, that means oversampling “hold fire” or “switch to non-lethal” decisions until they are no longer statistical outliers. In reinforcement learning, it means structuring the reward function so that restraint in doubtful cases earns more cumulative value than aggression, even if aggression sometimes yields short-term operational gains. The aim is not to teach machines to imitate human morality, but to hard-code a structural preference for restraint even and especially when the law is unclear. 

Unmanned Ground Vehicles sketch, The Future Soldier’s Load and the Mobility of the Nation (November 2001), page 7, Gen. Paul F. Gorman, US Army Combined Arms Center
Risks of Omission

Systematic vulnerabilities in decision-making compound in coalition or joint operations. Different states may train their AI systems with different datasets, simulation designs (if any), and legal interpretations. When such systems operate together, the seams between them can become legal blind spots. A particular AI system might abort an engagement that another proceeds with, creating conflicting operational tempos and complicating attribution if civilian harm occurs.

The danger is not limited to catastrophic, one-off mistakes. Over time, small, repeated deviations from IHL in marginal cases, where human commanders might have exercised restraint, can erode the protective function of the law. The result is a slow normalization of riskier behavior, driven not by political decision or doctrinal change, but by the statistical inertia of machine learning models. This is the core paradox: without safeguards, AI systems can become more predictable in some ways, yet less reliable in the moments when unpredictability, when acting against the statistical grain, is essential for lawful conduct.

Finally, military AI does not fail or succeed in complying with IHL by accident. Its behavior is the predictable result of how it is trained, the data it is given, the scenarios it is exposed to, and the rules embedded in its decision logic. How AI functions and the choices it makes are downstream from decisions made by humans in developing, training, and fielding it.

Governance, Audit, and Human Control

Bridging the gap from promising lab results to lawful behavior in the field requires more than good training runs. It needs an end-to-end governance spine that links data, models, code, test harnesses, deployment configurations, operators, and independent oversight into a single chain of accountability. That spine assigns clear decision rights, specifies the artifacts required at each stage, and shows how evidence of compliance is produced and preserved. It starts with curated, documented datasets and explicit problem statements; runs through model specifications, reward functions, and constraint schemas; includes scenario-coverage plans, legal reviews, and red-team evaluations; and culminates in authorization-to-operate, human control interfaces, and post-incident audits. Every hand-off (data steward to model owner, model owner to system integrator, integrator to unit commander) should be traceable, signed, and reversible. In effect, the system deploys with its own accountability case: a living dossier that ties design choices to legal obligations and links runtime behavior to reviewable logs. Without that spine, even a technically impressive model becomes an orphan in the field: fast, capable, and difficult to supervise precisely when the fog thickens. The pathway from design to deployment rests on a few non-negotiables.

  1. Data governance as policy, not plumbing. If models think with the statistics we give them, then data curation is a legal act as much as a technical one. Training corpora should be versioned and signed; every inclusion and exclusion choice documented; every oversampling decision for restraint labeled with a rationale. That record is what allows commanders, investigators, or courts to see how humane fallbacks were embedded by design rather than inferred after the fact.
  2. Test what you train, and then test against what you didn’t. A system that performs well on its own distribution can still fail in the wild. Beyond standard validation, mandate distribution shift drills: deliberately swap sensor suites, degrade GPS, introduce spoofed friend/foe signals, and remix civilian movement patterns. In each drill, the system should either preserve lawful restraint or trigger a doubt protocol that defers to a human. Where it does neither, the failure should feed back into simulation design and reward shaping.
  3. Non-overridable guardrails in code and command. Constraint layers (identification gates, collateral damage thresholds, no-strike lists) must be technically non-overridable by the model and procedurally difficult to override by humans. If escalation is necessary, require dual-key authorization with automatic logging. The goal is not to box out judgment but to ensure extraordinary actions leave extraordinary traces.
  4. Responsibility matrices embedded in the system. Every deployed AI component – classifier, tracker, recommender, fire-control interface – should write structured, time-synchronized logs that include model version, data slice identifiers, intermediate confidence values, triggered constraints, and who approved or halted an action. Think of this as a living annex to rules of engagement: not just “what the machine did,” but why it “thought” that was permissible, and who remained on the loop. A minimal sketch of how these guardrails and logs might fit together follows this list.
  5. Human-on-the-loop that actually has leverage. Meaningful human control is not a checkbox; it is the ability to intervene in time with understanding. Interfaces must surface uncertainty (not just a single confidence score), show near-miss counterfactuals (“if civilians are within X meters, the system will abort”), and offer safe, low-latency actions (pause, shadow/track, switch to non-lethal). If the only human interaction available is “approve” under time pressure, control is nominal, not meaningful.
  6. Coalition interoperability without legal dilution. Joint operations will mix systems trained on different data and doctrines. Interoperability standards should cover not only communications and formats but also minimum legal behaviors: shared constraint schemas, common doubt thresholds, and audit fields. The safest path is least-common-denominator legality: when systems disagree under uncertainty, the coalition default is restraint.
  7. Pre-deployment red teaming and post-incident review. Before fielding, require adversarial evaluations by teams empowered to break things: reward-hacking hunts, “blinking target” scenarios, and deception trials. After any incident with potential civilian harm, pull the synchronized logs, reconstruct the model’s decision path, and replay counterfactuals to see whether humane fallbacks would have triggered with slightly different inputs. Treat these reviews like flight-safety boards: technical, blameless, relentlessly corrective.
  8. Make restraint measurable. What we measure, we secure. Track deferred engagements under uncertainty, rate of doubt-protocol activations, guardrail trip frequency, and time-to-human-intervention. Trend them over time and across theaters. If these metrics decay as models “improve,” it’s a warning that optimization is outpacing law.
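As a rough illustration of how items 3 and 4 above might fit together, the sketch below combines an identification gate, a collateral threshold, a no-strike list, dual-key escalation, and a structured log record. Every field name and threshold is a placeholder assumption, not a reference to any existing system.

```python
import json
import time
import uuid

# Illustrative, non-overridable constraints; in practice these would come from
# legal review and rules of engagement, not from model training.
CONSTRAINTS = {
    "min_identification_confidence": 0.90,   # identification gate
    "max_collateral_estimate": 0,            # collateral damage threshold
    "no_strike_zones": {"hospital_7", "shelter_12"},
}

def decide(model_output, approvals):
    """Apply hard guardrails before any engagement recommendation is released.

    The model cannot relax these checks; humans can escalate only with
    dual-key authorization, and every path through this function writes a
    structured, time-stamped record. All names and values are illustrative.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_output["model_version"],
        "confidence": model_output["confidence"],
        "collateral_estimate": model_output["collateral_estimate"],
        "zone": model_output["zone"],
        "approvals": sorted(approvals),
        "triggered_constraints": [],
    }

    if model_output["zone"] in CONSTRAINTS["no_strike_zones"]:
        record["triggered_constraints"].append("no_strike_list")
    if model_output["confidence"] < CONSTRAINTS["min_identification_confidence"]:
        record["triggered_constraints"].append("identification_gate")
    if model_output["collateral_estimate"] > CONSTRAINTS["max_collateral_estimate"]:
        record["triggered_constraints"].append("collateral_threshold")

    if not record["triggered_constraints"]:
        record["decision"] = "release_recommendation"
    elif len(approvals) >= 2:                       # dual-key escalation
        record["decision"] = "escalated_override"   # extraordinary action, extraordinary trace
    else:
        record["decision"] = "defer_to_human"       # doubt protocol: default to restraint

    print(json.dumps(record))                       # stand-in for a signed, synchronized log
    return record["decision"]
```

The important property is that the guardrails sit outside the learned policy: the model never sees them as parameters it can optimize around, and every override leaves a reviewable trace.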

In combination, these measures transfer human judgment (IL), secure robustness under uncertainty (RL and simulation), and institutionalize restraint via governance, constraint architectures, and independent audit, so that compliance is an engineered property rather than an assumption. The result is a verifiable accountability chain: datasets that show why restraint was learned, reward functions that make it valuable, guardrails that make it non-optional, and logs that make it reviewable. And because what we measure we secure, the system ships with metrics for doubt-protocol activations, deferred engagements, and guardrail trips, so commanders can see whether lawful caution is holding under stress. Only then does lawful behavior become the default under pressure, an engineered property of the system, rather than a hope we place in the gaps between probabilities and intent.

The autonomous system, Origin, prepares for a practice run during the Project Convergence capstone event at Yuma Proving Ground, Arizona, Aug. 11 – Sept. 18, 2020. Project Convergence is the Army’s campaign of learning to aggressively advance solutions in the areas of people, weapons systems, command and control, information, and terrain; and integrate the Army’s contributions to Joint All Domain Operations. (U.S. Army photo by Spc. Carlos Cuebas Fantauzzi, 22nd Mobile Public Affairs Detachment)

Growing a Governance Spine

Military AI will not “grow into” compliance with the law of armed conflict. It will do what it is trained, rewarded, permitted, and audited to do. In the fog of war, humans and machines both falter, but in different ways. Human commanders can depart from statistical expectations to privilege restraint; unmodified systems, bound to their learned probabilities, will not. That is why the humane minimum cannot sit at the margins of development. It has to be engineered into the center of learning, testing, and command.

Imitation learning can transmit judgment; reinforcement learning can build adaptability; simulation can force the improbable to be routine. Around that technical core, a governance spine (constraints that do not yield under pressure, doubt protocols that default to caution, signed datasets and reward functions, synchronized logs and metrics) turns legal aspiration into operational behavior. In coalitions, common constraint schemas and reviewable audit trails keep interoperability from becoming a legal blind spot.

At this point, two mistakes will sink this project: treating compliance as a software patch added after performance, or assuming that speed and scale will eventually smooth away edge cases. They will not. The edge cases are where the law does its most important work.

Compliance with the law of armed conflict must be an engineered property of the system: competence built through training, judgment transferred via imitation learning, robustness under uncertainty secured by simulation, and a non-derogable humane floor enforced by constraints and audit. What ultimately matters is evidence (datasets, reward functions, constraint triggers, and synchronized logs) showing that restraint prevailed when uncertainty was greatest. Only on that basis can militaries credibly claim that lawful conduct remains the default under operational pressure.


1Where states choose to pursue development and fielding, simulation is the decisive safeguard. A different policy path is to forgo development or to prohibit particular applications outright.

2Articles 57(2)(a)(ii) and 57(2)(b).

Enforce the War Crimes Act Against Americans Who Committed Them In Gaza

Abdelhalim Abdelrahman is a Palestinian-American political analyst, host of the Uncharted Territory Podcast and a Marcellus Policy Fellow alum at the John Quincy Adams Society advocating for a restrained U.S. foreign policy in the Middle East centered around American laws and respect for Palestinian human rights.

Following the October 7th, 2023 attacks, Israel embarked on a series of military operations that human rights organizations, legal experts, and U.N. special rapporteurs recognize as constituting genocide. While allowing a trickle of aid into Gaza, Israel’s government has prevented the United Nations from delivering aid, instead disbursing lifesaving supplies through the Gaza Humanitarian Foundation (GHF), created and backed by the U.S. Department of State and Israel. However, in creating the GHF, Israel and the United States have enabled U.S. security contractors to participate in war crimes alongside the IDF against Palestinian civilians in Gaza.

In July 2025, U.S. Army veteran Anthony Aguilar blew the whistle on the GHF, reporting that he saw both American contractors and the Israel Defense Forces (IDF) indiscriminately shooting at Palestinians near aid sites. Aguilar described the aid sites as “death traps” for Palestinians. He is not alone in this assessment. The United Nations estimates that over 850 Palestinians in Gaza have been killed around GHF distribution sites, either by American subcontractors or the IDF. In August, CBS News interviewed another GHF whistleblower under the alias “Mike,” who recounted American subcontractors and the IDF deliberately targeting Palestinian civilians near aid sites.

These subcontractors are not the only U.S. citizens complicit in war crimes in Gaza. The Guardian published a report in September on Daniel Raab, a U.S.-Israeli citizen from Chicago operating as a sniper in Gaza. Raab is part of Paratrooper Unit 202, a sniper division of the IDF, for which Raab’s parents helped fundraise over $300,000.

Under the 1996 War Crimes Act, Congress and the Department of State have the authority to investigate and charge citizens and dual nationals who facilitate war crimes. While federal-level action is unlikely under the Trump administration, members of Congress, civil society, and groups leading strategic litigation should press to use U.S. law to hold both the Israeli government and individual perpetrators accountable.

The War Crimes Act

Although the United States’ engagement with international war crimes prosecution is complex, under statutory law, U.S. nationals can be held domestically accountable for war crimes. The War Crimes Act (WCA), passed in 1996 by unanimous consent in the Senate and voice vote in the House, criminalizes a range of conduct constituting “grave breaches” of the Geneva Conventions when committed by U.S. nationals or members of the U.S. armed forces. The scope of the WCA is significant: conduct committed overseas is not exempt from prosecution under U.S. law if the perpetrator is an American national.

Conduct by U.S. citizens and dual nationals in Gaza, like firing on civilians, could constitute a “grave breach” under the Geneva Conventions. This offers a legal basis for the Department of Justice to open WCA investigations against American subcontractors with the Gaza Humanitarian Foundation, or citizens serving in IDF units plausibly committing war crimes.

The DOJ should immediately create a War Crimes Task Force, which would actively investigate credible allegations of war crimes committed by U.S. citizens subcontracted by the Gaza Humanitarian Foundation, along with American-Israelis serving in the IDF. This task force should have expertise in international humanitarian law, open-source forensics, and conflict-zone investigations. This step would make clear that American citizenship is not a means to evade responsibility, and would allow the U.S. government to enforce the War Crimes Act to help prevent future impunity. Such an effort would also likely clear the path for a long-overdue accounting of war crimes committed by people serving in the U.S. military, such as the 151 cases uncovered by The New Yorker and the Pulitzer Center in 2024.

Disarming Dangerous ‘Allies’

While the War Crimes Act allows for charges to be brought against perpetrators, the State Department and Department of Defense are obliged under the Leahy Laws to prevent continued military aid to foreign military units “where there is credible information implicating that unit in the commission of gross violations of human rights”. There is potential for overlap: if an American is serving in a foreign military unit committing abuses, they could be charged under the War Crimes Act, and their unit should be flagged for rigorous vetting under the Leahy Laws, though the crimes need not overlap for either measure to be useful.

The Leahy Laws offer another tool to prevent the use of American weapons in human rights abuses. Initially passed in 1997, and expanded and reaffirmed since, the Leahy Laws prohibit U.S. security assistance to any foreign security force unit “about which” credible information exists of gross violations of human rights, including but not limited to torture, extrajudicial killing, and enforced disappearance. The Leahy Laws cut off specific military aid to units found in violation, but they do not at present bar such units from receiving assistance given to a foreign country’s military as a whole; once distributed, that aid becomes untraceable and can end up in the hands of specific units that violate human rights. Patrick Leahy, the former senator whose name the Leahy Laws bear, argued in spring 2024 that the laws should be applied to Israel. “Unlike for most countries,” he wrote, “U.S. weapons, ammunition and other aid are provided to Israeli security forces in bulk rather than to specific units. The secretary of state is therefore required to regularly inform Israel of any security force unit ineligible for U.S. aid because of having committed a gross violation of human rights, and the Israeli government is obligated to comply with that prohibition.”

To date, the Leahy Laws have been used to impede funding to suspect units in Colombia, Pakistan, Egypt, Ukraine, and elsewhere, but as Leahy himself noted, since “the Leahy law was passed, not a single Israeli security force unit has been deemed ineligible for U.S. aid, despite repeated, credible reports of gross violations of human rights and a pattern of failing to appropriately punish Israeli soldiers and police who violate the rights of Palestinians.”

Congress and State should end that double standard. Accountability and integrity under U.S. law demand that the Leahy conditions be upheld in every instance, even and especially when friends and allies commit war crimes.

Daniel Raab’s sniper unit, Paratrooper Unit 202, should be subject to rigorous Leahy Law vetting, reasserting that U.S. military assistance (training, intelligence, equipment, and more) can and will be withdrawn on credible allegations of unlawful attacks. Other units responsible for war crimes, like the targeting of civilians, should be identified through open-source or news reporting and similarly be made ineligible for U.S. support under the Leahy Laws.

Policy Recommendations 

Congress must do its job to ensure oversight and transparency. It should require the State Department to make regular public reports on investigations of U.S. nationals under the War Crimes Act, as well as the results of Leahy vetting. Congress can and should hold oversight hearings to demand the executive branch take action (or explain its inaction) when such violations come to light. The U.S. government should coordinate with U.N. fact-finding missions, NGOs, and international prosecutors and share evidence it has that can be used to corroborate allegations. It should be absolutely clear, through public messaging by the State Department, that U.S. nationals are not exempt from accountability mechanisms for violations committed anywhere.

Enforce U.S. Law on U.S. Nationals who commit war crimes
Increase Leahy Vetting on units seen in open source to be violating human rights
Ensure Oversight, Transparency, and International Cooperation when it comes to withholding arms and prosecuting war crimes

As the second anniversary of October 7 passes, and the tenuous terms of a ceasefire are once again agreed to, the genocide in Gaza remains a humanitarian and legal crisis. American citizens and nationals have been directly implicated in the violence, raising profound questions of accountability. Washington possesses legal tools, like the War Crimes Act, and legally mandated procedural obligations, like arms withholding under the Leahy Laws, but has lacked the political and moral courage to use them. Enforcing these statutes, through steps such as conditioning aid, imposing arms embargoes, and enhancing transparency, is essential to upholding international law, deterring future violations, and ensuring that American citizenship is never misused as a shield for war crimes.

The United States cannot credibly demand accountability for atrocities in other conflicts (e.g., Russian war crimes in Ukraine) while it simultaneously shields American GHF subcontractors and possibly other IDF dual nationals complicit in war crimes in Gaza.

If serving in a foreign military is a free pass to immunity, then citizenship, and by extension the law, loses meaning. The U.S. must not create that precedent. The threat, however small, of prosecution or conditioned assistance will have a deterrent effect: knowing that one’s military actions may later have legal and reputational consequences will push compliance with IHL. 

While the present administration may dismiss harm to national reputation as a threat to U.S. strategic interests, unaddressed accusations of grave human rights violations by U.S. citizens abroad carry real diplomatic risk for Washington. Ignoring them erodes trust with allies and partners that expect and demand that the U.S. uphold its own laws.

Think Big to Rein in the Arms Trade 

John Ramming Chappell is an Advocacy and Legal Advisor at Center for Civilians in Conflict (CIVIC).

The arms trade and its human consequences have had an outsized role in defining U.S. foreign policy in the Trump era. The United States sells more weapons than the next seven countries combined. For progressives, the use of American-made weapons in atrocities in Yemen and Gaza has been at the center of moral and strategic debates. While the grave consequences of U.S. arms sales are indisputable, the policy debate around U.S. arms transfers has been relatively narrow. Progressives are right to interrogate the human and strategic costs of the U.S. arms trade. But when it comes to solutions, they should think bigger.

The Congressional Right To Withhold Arms

The framework governing the U.S. arms trade today is articulated in the Arms Export Control Act, and, to a lesser extent, the Foreign Assistance Act. Under the current framework, the State Department is generally free to sell weapons, or grant licenses to companies to sell weapons, without congressional input, although it may seek legislators’ input as a courtesy or out of respect for norms. For sales exceeding a specified dollar threshold ($50 million for many countries and weapon types), the President must inform Congress of the proposed sale before entering into an agreement or granting an export license to a company for the weapons in question. Congress nominally has an opportunity to block a notified sale through a joint resolution of disapproval using privileged procedures that allow any senator to seize floor time. 

In practice, blocking a sale requires a two-thirds majority in both the House and the Senate, and Congress has never managed to block a sale in this way. In addition to the basic structure of the congressional-executive relationship, Congress has enacted laws to prohibit arms sales to specific countries or countries that meet specified criteria, although implementation of the latter falls to executive branch officials.

Solutions Distilled
Congress should reintroduce the National Security Powers Act and National Security Reforms and Accountability Act, or other legislation to reassert congressional authority over arms sales, and work toward a mark-up.
Advocates and researchers should connect crises to the structures that made them possible.
Presidential aspirants should commit to working with Congress to overhaul the arms sales framework.
Ultimately, arms sales should be guided by a “first, do no harm” ethos.

This framework is not inevitable. Under the Foreign Commerce Clause of the U.S. Constitution, all authority to regulate the arms trade belongs to Congress. Congress has since delegated much of that authority to the president, but it could reclaim that authority by passing legislation at any time. This allocation differs from constitutional war powers, for example, which are shared between the president and Congress. There is comparatively little debate about the constitutional separation of powers over arms sales because it is clear-cut and in Congress’s favor. Due to Congress’s expansive power to regulate arms exports, the possible arrangements for an arms export framework are extensive.

Since the earliest days of the Constitution, Congress has passed laws to control and limit arms sales in the national interest. Congress prohibited all arms exports from 1794 to 1795 and from 1797 to 1800. General prohibitions also took effect during the American Civil War and World War I. An 1898 law authorized the president to block exports of war materiel to Spanish territory during the Spanish-American War, and a law enacted in 1912 and then amended several times thereafter allowed for arms embargoes on Latin American countries during civil unrest. Under the Neutrality Act of 1935, a mandatory arms embargo applied to all countries engaged in interstate war. Since the 1970s, Congress has enacted universally applicable prohibitions on arms sales based on human rights and humanitarian criteria. Congressionally mandated country-specific embargoes have applied to dozens of countries since the 18th century.

Rifling Through The Recent Past

For the arms industry, arms sales bureaucrats at the State Department, and hawkish legislators, the problem with U.S. arms sales is that they are too few and too slow. In collusion with the Trump administration, House Republicans are pushing to require congressional notification for fewer sales and to make it easier to rush arms sales through the State Department.

Advocates for accountability in the arms trade have mostly offered proposals that tinker at the edges of the existing arms export control framework. Legislators have sought the faithful implementation of restrictions already on the books. The top Democrats on the congressional foreign affairs committees have proposed a package of reforms to close loopholes and introduce some new restrictions within the existing framework. A new bill from Rep. Sara Jacobs and other House Foreign Affairs Committee Democrats would require a system to track when American-made weapons are used to harm civilians or violate international law. These are much-needed efforts that, if successful, will lead to a better arms export system that better protects civilians and promotes accountability for human rights abuses. But they do not change the more fundamental issues of congressional acquiescence and presidential overreach.

Attention to the arms trade in the Trump and Biden administrations has centered on the use of U.S. weapons in atrocities in Yemen and Gaza. In Yemen, the Saudi-led coalition used precision-guided munitions from the United States in attacks that killed civilians. In August 2018, a Paveway bomb manufactured by General Dynamics in Texas killed at least 26 children when it hit a school bus in Dhahyan, Yemen. The war also plunged Yemen into the world’s worst humanitarian crisis. Between January 2017 and August 2020, the State Department approved 4,221 arms sales to Saudi Arabia even as evidence mounted of atrocities by the Saudi-led coalition. Criticism of the US-Saudi relationship mounted after the murder of Jamal Khashoggi in October 2018, and congressional efforts to pass joint resolutions of disapproval followed. In July 2019, President Trump vetoed three such resolutions that Congress passed on a bipartisan basis. As a presidential candidate, Joe Biden pledged to make Saudi Arabia a pariah, and he announced early in his presidency that the United States would no longer sell “offensive” weapons to the kingdom. But by 2022, he visited Crown Prince Muhammad bin Salman in Jeddah, and in 2024, he lifted the ban on offensive weapons exports, citing improvements in Saudi civilian protection practices. The system that allowed sales to Saudi Arabia to continue throughout the Trump years remains intact.

Since October 2023, debates around the U.S. arms trade have centered on Gaza. After Hamas’ October 7 attacks in Israel, the Israeli government mounted a bombing campaign in Gaza that has killed at least 64,700 Palestinians. Investigators have repeatedly confirmed the use of American bombs in war crimes. The United States delivered 90,000 tons of arms to Israel from October 7, 2023 to May 2025 and provided almost $22 billion in taxpayer funds to Israel. Congressional attention has often focused on Israeli authorities’ sustained restrictions on humanitarian aid deliveries to Gaza, which have subjected Palestinians in Gaza to famine. U.S. law prohibits arms sales to countries that restrict U.S. humanitarian aid, but the Biden administration refused to apply that law or to cut off aid to any Israeli military unit under the Leahy law, which bans military aid to units that have committed a gross violation of human rights. 

Under congressional pressure, the Biden administration issued a policy on February 24, 2024 requiring assurances that the Israeli government would facilitate humanitarian access and use weapons in compliance with international humanitarian law. But when it came time to report on the credibility of those assurances and Israeli eligibility to continue receiving U.S. arms in May 2024, the Biden administration concluded that weapons sales to Israel could continue. During his presidency, President Biden restricted U.S. transfers of 2,000-pound bombs to Israel and held up a sale of rifles likely intended for settler militias, but otherwise kept weapons flowing. President Trump has reversed all restrictions and endorsed the forced displacement of Palestinians out of Gaza. During both the Biden and Trump administrations, Senator Bernie Sanders has forced three sets of joint resolution of disapproval votes on U.S. arms sales to Israel, with the most recent garnering support from a majority of the Senate Democratic caucus.

In the Yemen and Gaza arms debates, Congress criticized U.S. sales to Saudi Arabia and Israel without revisiting the structures that made U.S. complicity in atrocities possible. In the context of an ongoing rule-of-law crisis, Congress cannot trust the president to faithfully implement the law.  

This isn’t the first time the United States has experienced a rule-of-law crisis centering on arms sales: the Iran-Contra affair revolved around the Reagan administration’s efforts to evade a congressional ban on arms transfers to the Nicaraguan Contras. The congressional investigation that followed barely scrutinized the administration’s violation of the law, focusing instead on illicit transfers of funds. The ringleaders of the conspiracy thrived despite their involvement in the scandal, with Reagan’s vice president, George H.W. Bush, being elected president soon after.

The human costs of U.S. arms sales have rarely drawn as much public attention as they do today. Preventing harm in the future requires looking at the big picture. Vigorous public debates about the U.S. arms trade should induce legislators to revisit the first principles of the arms export framework. Under the current arms export control framework, the game is rigged in the White House’s favor, despite Congress having all the constitutional power. Congress seems to have forgotten that it can rewrite the rules. It has tried before and should do so again.

The first version of the Arms Export Control Act that Congress passed would have created a very different system than the one that exists today. In 1975, Senator Hubert Humphrey introduced the International Security and Arms Export Control Act, “one of his crowning final achievements,” according to his aide and later New Mexico governor Bill Richardson. The bill included a $9 billion annual ceiling on arms sales and a prohibition on security assistance to governments violating human rights, and required congressional approval for military advisory missions after 1977. President Ford vetoed it, forcing Congress to introduce a diluted version that forms the basis for the arms export oversight framework today. Some principles from Senator Humphrey’s original bill were incorporated into President Carter’s 1977 conventional arms transfer policy. But implementation of the policy was lackluster, and President Reagan quickly rescinded it upon entering office.

Long before he became the face of unconditional arms sales to Israel, Senator Joe Biden championed an effort to restructure the arms trade. INS v. Chadha, a 1983 Supreme Court case, had effectively raised the congressional threshold to block an arms sale through a joint resolution of disapproval from a simple majority to a two-thirds supermajority. Soon after, President Reagan vetoed a joint resolution of disapproval to block a major arms sale to Saudi Arabia, and the Senate failed to override the veto. Before Chadha, a veto would not have been possible. Concerned about Chadha’s implications for congressional oversight of the arms trade, Senator Biden introduced a bill to require affirmative congressional approval for major arms sales to countries other than NATO members and a few other allies. The Reagan administration and the arms industry opposed it.

In 1993, Sen. Mark Hatfield and Rep. Cynthia McKinney introduced the Code of Conduct on Arms Transfers Act as part of an international, civil society-driven effort to rein in the arms trade. In addition to requiring a code of conduct, the bill allowed arms sales only to countries the president could certify promoted democracy, respected human rights, did not engage in aggression, and participated in the United Nations Register of Conventional Arms. Unlike more common waiver provisions, the president had to request exemptions from Congress, which had to approve them by an affirmative vote.

The most recent legislative effort to meaningfully shift the distribution of arms transfer authorities between Congress and the president came in the Senate National Security Powers Act and House National Security Reforms and Accountability Act, which adopted an updated version of the Biden approach in their arms sales provisions. Dubbed the “flip the script” approach to arms sales, the bill would restore congressional authority for controversial sales according to item and recipient, while allowing less-risky sales to continue. The sticking point for the bill, if it is reintroduced, will be Israel. The National Security Powers Act used a list of countries already in the Arms Export Control Act to determine which countries would be eligible for sales without express congressional authorization. That list includes Israel, and so the National Security Powers Act, as last introduced, would allow sales to Israel without express congressional approval. In light of the Israeli government’s pattern of atrocities, it should not receive this privilege if the bill is reintroduced.

Ban Bombs Better

An arms export framework has to answer four questions:

• What can the president do alone, without telling Congress?
• What does the president have to tell Congress? 
• What does the president have to ask for specific authority to do?
• What does Congress prohibit the president from doing? 

These days, most debates focus on implementation of laws pertaining to the final question, like the Leahy Law and Section 620I of the Foreign Assistance Act. But when it comes to the balance between Congress’s power and the president’s, the first and second questions are most important.

Congress has debated these questions before, most intensely in the 1930s as it reckoned with the role of the arms industry in World War I and again in the 1970s as it reasserted congressional authorities after Watergate and the American wars in Southeast Asia.

Since those 1970s-era reforms, congressional authority has slowly eroded. The first warning sign came in INS v. Chadha. Today, the erosion is only accelerating. As a workaround to Chadha, the process of informally notifying congressional committees of major arms sales before an official notification has long allowed committee leadership to hold up major arms sales to ask questions or request modifications. But the Trump administration has repeatedly overridden these holds. Meanwhile, the White House and congressional allies are pushing to require notification for fewer arms sales and reduce the opportunities that legislators have to question proposed sales. For decades, the Arms Export Control Act’s joint resolution of disapproval mechanism has been described as a deterrent to harmful arms sales, despite the practical impossibility of actually enacting such a resolution. Soon, that deterrent effect will be null. For progressives to meet the moment, they need to change the terms of the debate. 

With little prospect of enacting meaningful legislation in this Congress, legislators and advocates should envision a different system to promote accountability, human rights, and international law and develop a consensus around that vision. Congress should be working toward a reclamation of its constitutional authority and an end to the system that allows the president to sell weapons to war criminals and human rights abusers with impunity.

Military AI Challenges Human Accountability

Davit Khachatryan is an international lawyer and lecturer focusing on the intersection of armed conflict, emerging technologies, and international law. 

Artificial intelligence is no longer confined to code-stained labs or military contractors’ slideshows: it has become a regular presence on modern battlefields. In 2024, as Israeli analysts relied on tools like Gospel and Lavender to generate targeting lists, the Pentagon set out to deploy swarms of autonomous drones through its Replicator Initiative. Targeting algorithms (AI systems analyzing data to identify and prioritize military targets) now compress the decision cycle from days to minutes, sometimes seconds, fundamentally challenging the way law, ethics, and accountability operate in armed conflict. It was in response to these realities that, on December 24, 2024, the UN General Assembly adopted Resolution 79/239, affirming that international humanitarian law (IHL) applies “throughout all stages of the life-cycle of artificial intelligence in the military domain” and calling for appropriate safeguards to keep human judgment and control at the heart of military decision-making.

But resolutions and declarations, while necessary, do not themselves restrain machines. The responsibility for lawful conduct must remain anchored in human actors: commanders, engineers, and political authorities. Algorithms, after all, have no legal personality; they cannot form intent, stand before a court, or bear the weight of tragedy or blame. This is why the real task for military commanders, policymakers, and legal advisers is about translating the timeless obligations of the laws of war into practices and workflows that keep the chain of accountability intact, even as machines accelerate the tempo of armed conflict beyond anything imagined by those who first wrote those rules.

Every new AI system deployed for military purposes must be subject to a recurring legal review, with transparent records.
Embedding a responsibility matrix within the metadata and operational logs of each AI system would make it possible to trace faults back to their source.
Require rigorous adversarial testing of every AI tool before fielding.
Build safeguards into the technology, doctrine, and organizational culture that guide its use.

The question, then, is whether states are willing and able to build safeguards so that, even as decisions speed up and control becomes diffuse, a human being remains at the end of every algorithmic chain of action.

What is a Military AI?

Ask three officers to describe what counts as AI in uniform, and you will likely hear three different answers. One will mention software that sorts satellite imagery, another will point to a drone that selects its flight path, and a third may describe a logistics program that determines which convoy moves first. All of them are correct because military AI is a broad spectrum of software-enabled capabilities that touch nearly every corner of modern operations.

At one end of this spectrum are decision-support algorithms. These tools sift through immense volumes of data and present patterns or anomalies for a human to review. They normally remain firmly subordinate to human choice; nothing happens until a commander or operator approves the recommendation. Further along are autonomous platforms that can steer themselves, prioritize targets, and, in some cases, use weapons or make lethal decisions without direct oversight. Both kinds of systems can adapt and learn from experience.

This capacity for continual learning is why military leaders are drawn to AI and also why lawyers are cautious. As these systems become more complex and more adaptive, it becomes significantly harder to demonstrate that every use of force still complies with IHL. The capacity to process information at machine speed promises new efficiencies and tactical advantages. It also threatens to outpace the ability of humans to scrutinize and override what the machine proposes. The more sophisticated the system, the greater the challenge of ensuring that its operation does not slip beyond the reach of law or human conscience.

Distinction, Proportionality, Precaution, and Indiscriminate Attack

At the heart of international humanitarian law are a handful of principles that every commander must observe in the conduct of hostilities, regardless of the technology at their disposal. The rule of distinction requires that attacks be directed only at combatants and military objectives. This obligation reaches back to the design and validation of algorithms. If the data used to train a targeting model is biased, incomplete, or outdated, there is a real risk that the system will misclassify a school as an ammunition depot or mistake a civilian vehicle for a military convoy. Effective distinction, then, depends not only on rigorous data hygiene and continual red-team testing, but also on the capacity of the system to express its confidence in a way that human commanders can understand and question. When an algorithm cannot explain its reasoning, or when its output cannot be interrogated by a human, the legal requirement of distinction is at risk.

Proportionality is the next pillar. Even when a target is lawful, an attack is forbidden if the expected harm to civilians would be excessive compared to the anticipated military advantage. AI magnifies the challenge: it can process immense quantities of data and recommend actions at speeds that compress human decision-making to mere seconds. The speed can encourage a dangerous automation bias, where commanders are inclined to trust the machine’s judgment without fully weighing the consequences. To meet the proportionality requirement, there must be a clear, understandable record of what information the system used, what options were considered or rejected, and how the balance was struck between expected military advantage and potential civilian harm.

Precaution demands that every feasible step be taken to spare civilians and civilian objects before, during, and after an attack. In the context of AI, this means not only reviewing weapons before their use, but also conducting continual reassessment as software evolves or as new data and sensors are incorporated. This kind of legal review is often called an “Article 36 review,” referring to the provision of Additional Protocol I to the Geneva Conventions. That article requires states to determine whether any new weapon, means, or method of warfare they develop can be used consistently with international law. Even countries that are not parties to Additional Protocol I often conduct similar reviews. Effective precautions also require tamper-resistant records of every input and decision, so that when failures occur, they can be reconstructed and learned from. In uncertain conditions, systems must default to conservative modes, such as surveillance only, rather than risk unintended harm through automated action based on outdated or corrupted data.
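A minimal sketch of such a conservative default, assuming purely illustrative thresholds for sensor staleness and classification confidence, might look like this; the mode names and cutoffs are hypothetical and would in practice be set by doctrine and legal review.

```python
from datetime import datetime, timedelta, timezone

# Illustrative thresholds; real values would come from doctrine and legal review.
MAX_SENSOR_AGE = timedelta(minutes=5)
MIN_CONFIDENCE = 0.90

def select_mode(confidence, last_sensor_update, integrity_check_passed):
    """Default to a conservative, surveillance-only mode whenever the inputs
    are stale, corrupted, or too uncertain to support automated action."""
    stale = datetime.now(timezone.utc) - last_sensor_update > MAX_SENSOR_AGE
    if stale or not integrity_check_passed or confidence < MIN_CONFIDENCE:
        return "surveillance_only"      # precaution: observe, record, hand off to a human
    return "engagement_support"         # recommendations still require human approval
```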

Finally, IHL prohibits indiscriminate attacks. Any use of force that cannot be reliably confined to military objectives or that is likely to strike civilians and civilian objects without distinction is forbidden. AI promises new levels of precision, but it also presents new risks if boundaries are not clearly defined and tested. The only way to prevent this is through hard-wired limits on where and when systems may operate, relentless stress-testing under a wide variety of conditions, and the retention of a genuine human veto at every stage.

In sum, IHL principles remain in force, but their practical application now depends on embedding those rules into the very architecture and operation of AI. 

Speed, Opacity, and Proliferation

AI exposes points of friction that IHL rules were never designed to anticipate. When AI is integrated into the targeting cycle or command and control networks, it can compress the decision-making process from hours to seconds. Early warning systems may communicate directly with automated defenses, and predictive algorithms can recommend preemptive action before any human has fully grasped the situation. The space for reflection, deliberation, and legal review narrows dramatically, and the traditional safeguards built around time for human intervention may vanish. 

Opacity presents an equally serious challenge. Unlike conventional weapons or even most traditional software, AI often operates as a black box, producing outputs that even its creators cannot fully explain. When models are trained on synthetic or computer-generated data, or proprietary protections prevent independent scrutiny, the ability of lawyers and commanders to review, test, or question the system is severely limited. Under such conditions, the concept of a one-time weapons review loses much of its meaning, and the burden shifts to continuous, in-depth monitoring and oversight, tasks that are often difficult to sustain.

Proliferation is the third critical fault line. The trained neural weights, code, and operating concepts of many military AI systems can be transmitted anywhere in the world in seconds. A commercial drone can be turned into a reconnaissance or strike platform by a software patch, delivered by email, that repurposes its mission, or, if heavy enough, used as a primitive weapon by crashing it into a target. Thinking more creatively, such drones can be launched in coordinated swarms to overwhelm air defenses. As these technologies spread, and as militaries operate alongside coalition partners using different systems trained on different data and following different logic, the risk grows that the seams between systems will become legal blind spots. 

Keeping Humans in Command

The purpose of IHL is to protect people and limit suffering during armed conflict. To achieve this, the law is written to make sure that responsibility for the use of force rests with human beings, not with machines. That responsibility cannot be preserved by invoking meaningful human control as an abstract principle; it has to be built into concrete, verifiable practices. 

First, every new AI system deployed for military purposes must be subject to a recurring legal review. A record of these reviews, maintained transparently and with clear signatures from legal, technical, and operational authorities, would ensure that no system enters the field without documented human oversight and an unbroken chain of responsibility.

Second, the architecture of responsibility must run through the entire life cycle of each system. From the earliest stage of data collection and model training, through to deployment, field use, and after-action review, every layer should be linked to identifiable individuals or teams who are empowered to act and accountable for their choices. Embedding this kind of responsibility matrix within the metadata and operational logs of each system makes it possible to trace every decision and intervention back to its source, even years later, if a failure must be investigated.
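
Purely as an illustration of the idea, with hypothetical roles and identifiers, a responsibility matrix of this kind might be attached to a system's operational logs like so:

```python
import json

# Hypothetical responsibility matrix: every life-cycle stage maps to a named,
# accountable role, and the mapping travels with the system's operational logs.
responsibility_matrix = {
    "data_collection": {"role": "data steward",       "contact": "unit-data-cell"},
    "model_training":  {"role": "ML lead",             "contact": "dev-team-7"},
    "legal_review":    {"role": "judge advocate",      "contact": "legal-office"},
    "deployment":      {"role": "commanding officer",  "contact": "task-force-hq"},
    "after_action":    {"role": "review board chair",  "contact": "oversight-board"},
}

def stamp_log_entry(entry: dict, stage: str) -> dict:
    """Attach the accountable party for a given life-cycle stage to a log entry."""
    entry["accountable"] = responsibility_matrix[stage]
    return entry

print(json.dumps(stamp_log_entry({"event": "model v2.3 fielded"}, "deployment"), indent=2))
```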

Third, no AI tool should be fielded until it has been subjected to rigorous adversarial testing. Red-team exercises, designed to probe for bias, vulnerabilities, and failure modes, must become a routine part of military procurement and certification. Where deficiencies are found, the system should be withheld from deployment until those risks are resolved. This process must be a core responsibility of states and their armed forces.

Finally, safeguards must be built not only into the technology but into the very doctrine and organizational culture that guide its use. Preauthorized defensive systems can be kept within tightly defined geographic and temporal boundaries, while time-critical operations should still require streamlined but explicit human approval. Strategic or high-consequence strikes must always retain full deliberative review. Coalition operations and multinational partnerships need common standards and protocols so that interoperability does not become a backdoor for legal evasion. These measures are the living expression of the law’s demand that there is always a human face and a human name at the end of the chain of action. 

Governing Machines

AI is already changing the face of war. Its speed, scale, and adaptability have the potential to transform the conduct and the ethics of armed conflict, presenting risks as profound as its promised advantages. Yet it is people (political leaders, military personnel, engineers, and lawyers) who remain responsible for the choices that machines enable.

Resolution 79/239 is a clear assertion that IHL foundations must not be surrendered to the logic of the algorithm. The task ahead is not to demonize artificial intelligence, nor to place our hopes in technical fixes alone, but to ensure that the rules of war are translated into new domains and that the structures of oversight are robust enough to keep responsibility where it belongs.

If we succeed, AI may yet deliver on its promise of greater precision and restraint. If we fail, we risk allowing the tempo of technology to outrun the reach of law. In the end, the most important question is not what our machines can do, but whether we have the resolve and imagination to govern them, so that, even as the future unfolds at the speed of code, the chain of accountability remains unbroken.

Expanding Use of Emergency Arms Authorities Requires More Congressional Oversight

Janet Abou-Elias is a research fellow at the Center for International Policy and co-founder of Women for Weapons Trade Transparency.

In late May, the Government Accountability Office (GAO) released a quietly damning report on the Presidential Drawdown Authority (PDA) – a statutory tool that allows the president to transfer defense articles and services from U.S. stockpiles to foreign partners without advance congressional approval. The findings show that in the rush to meet wartime needs, the Department of Defense (DoD) has repeatedly neglected basic statutory safeguards intended to protect U.S. force readiness and fiscal discipline.

Yet while the report’s spotlight is on Ukraine, another drawdown mechanism is operating with even less public accountability: the War Reserve Stockpile Allies–Israel (WRSA-I) program. Both PDA and WRSA-I reflect a concerning drift away from the core principles of legislative oversight and budgetary transparency in U.S. security assistance.

Congress should act on the GAO’s findings by:

  1. Mandating O&M assessments for all PDA drawdowns and requiring public reporting on their outcomes;
  2. Applying equivalent oversight standards to WRSA-I, including disclosure of drawdown quantities and conditions, and reinstating valuation thresholds;
  3. Establishing centralized tracking mechanisms across DoD to monitor cumulative impacts on U.S. stockpiles, regardless of the drawdown authority used;
  4. Codifying replenishment requirements, particularly for frequently used programs, to avoid ad hoc budgeting that undermines strategic planning.

The Oversight Shortfalls in PDA

Since the start of Russia’s full-scale invasion of Ukraine, the United States has used PDA to transfer more than $31.7 billion in arms to Ukraine, as well as over $1 billion to Taiwan and Haiti. The GAO found that the DoD failed to conduct required Operations and Maintenance (O&M) budget impact assessments for 21 of the fiscal year 2024 PDA packages it reviewed, highlighting a persistent gap in oversight despite the scale of transfers authorized.

These assessments are not optional. They are required under the law to evaluate how pulling from U.S. stockpiles might affect military readiness, sustainment, and operations. The Pentagon has been on notice since at least 2016, when GAO first recommended that military services develop clear guidance for performing such evaluations. Nearly a decade later, that guidance remains nonexistent.

The report also confirms that replacement funding for PDA drawdowns is not guaranteed as it depends on discretionary congressional appropriations. While Congress has so far allocated over $45 billion to replenish what was sent to Ukraine, this process is ad hoc and susceptible to political delays. The absence of required impact assessments means policymakers are often voting on replacement funds without a clear understanding of operational tradeoffs or baseline stockpile metrics.

WRSA-I: A Parallel Drawdown Program with Even Less Scrutiny

While PDA has rightly come under scrutiny, another statutory mechanism has largely escaped public debate: the War Reserve Stockpile Allies–Israel (WRSA-I). Originally intended to pre-position U.S. materiel for contingency use in the Middle East, WRSA-I has, over time, evolved into a quasi-permanent drawdown channel for direct transfers to Israel, especially during military escalations.

Between 2023 and 2024, U.S. officials authorized multiple shipments from WRSA-I to Israel during its military attacks on Gaza. Unlike PDA, which requires at least limited congressional notification, WRSA-I transfers occur with almost no reporting obligations to Congress or the public. As a result, we know very little about what munitions were transferred, how often, and what consequences – logistical, operational, or political – they may have produced.

This lack of transparency is not incidental; it is the product of a decade of statutory dilution. Legislative amendments in the 2014 and 2021 National Defense Authorization Acts have gradually lowered the oversight threshold for WRSA-I by relaxing valuation limits and broadening eligibility for use. These changes were not widely debated and were often embedded in larger omnibus legislation.

The program’s legal scaffolding has shifted in ways that increasingly bypass both congressional oversight and traditional foreign aid processes. While framed as a logistical asset, WRSA-I now functions in practice as an off-budget arms channel – one that, like PDA, circumvents Foreign Military Financing (FMF) and arms export controls that would normally trigger review by congressional committees or the public.

A Broader Institutional Concern

The parallels between Presidential Drawdown Authority transfers and the use of the War Reserve Stockpile Allies–Israel program reflect a broader institutional pattern: U.S. security assistance is increasingly moving through emergency or pre-authorized mechanisms that operate outside of standard deliberative and oversight processes. In both cases, the executive branch has taken advantage of flexible authorities in the name of responsiveness – while neglecting the due diligence, reporting, and budgeting that Congress has required by law.

This is not just a bureaucratic failure. It raises fundamental constitutional concerns about the erosion of Congress’s Article I powers to authorize spending and oversee foreign military engagements. When weapons are transferred through opaque stockpiles or drawn down without proper assessments, it undermines the transparency and accountability that are essential to democratic control of the military.

Recommendations

Congress should act on the GAO’s findings by:

  1. Mandating O&M assessments for all PDA drawdowns and requiring public reporting on their outcomes;
  2. Applying equivalent oversight standards to WRSA-I, including disclosure of drawdown quantities and conditions, and reinstating valuation thresholds;
  3. Establishing centralized tracking mechanisms across DoD to monitor cumulative impacts on U.S. stockpiles, regardless of the drawdown authority used;
  4. Codifying replenishment requirements, particularly for frequently used programs, to avoid ad hoc budgeting that undermines strategic planning.

Absent these reforms, programs like the Presidential Drawdown Authority and the War Reserve Stockpile Allies–Israel risk becoming permanent fixtures of a shadow aid architecture – one that enables short-term arms transfers at the expense of long-term democratic oversight.

MSNBC: Trump’s ‘Golden Dome’ system is an expensive way to make America less safe

At MSNBC, Chief Editor Kelsey D. Atherton walks through how Trump’s recent announcement of a “Golden Dome” missile defense system is an expensive investment in insecurity.

“If missile defense works as promised,” writes Atherton, “it creates an opportunity for the leadership of the protected country to launch nuclear strikes without fear of suffering nuclear retaliation in return. This is true even if missile defense does not actually work as a defense, because overcoming planned defenses means building a larger arsenal and possibly taking a gamble on launching a nuclear first strike, rather than forever losing that deterrent effect.”

Read Trump’s ‘Golden Dome’ system is an expensive way to make America less safe at MSNBC.

How the India-Pakistan Crisis Became a Profitable Spectacle Online

Dr. Fizza Batool is an Assistant Professor at SZABIST University, Karachi, with expertise in South Asian Studies and Comparative Democratization. Connect with her on LinkedIn or read her works on ResearchGate.

In late April 2025, a terrorist attack in Pahalgam in Indian-administered Kashmir led to escalating tensions between India and Pakistan. After four days of intense conflict, a ceasefire was agreed on May 10, 2025, following US diplomatic intervention. This conflict wasn’t just fought on diplomatic and strategic fronts—it was fiercely contested in the digital space, where the battle continues even after military tensions have cooled. 

The crisis revealed a profound shift in the information hierarchy, with YouTube channels and TikTok accounts often generating more engagement than official government statements or traditional news broadcasts. Piers Morgan’s viral debate featuring social media influencers alongside traditional experts perfectly captures this new reality— individuals who once would have been mere spectators now sit on equal footing with government representatives and veteran journalists in shaping public understanding of international conflicts.

Behind this digital conflict lies a troubling economic reality: content creators on both sides of the border have transformed geopolitical tensions into a profitable business model. While their strategies differ dramatically according to the sociopolitical context of each country—Pakistani creators predominantly use humor and satire, while Indian counterparts amplify nationalist narratives—both operate within the same attention economy that rewards emotional engagement over factual accuracy. 

  • American policymakers have a responsibility to develop guidelines on crisis content monetization that recognize platform design choices have real geopolitical consequences.
  • The U.S. government should also fund partnerships with trusted civil society actors in South Asia, who can identify emerging digital threats and promote conflict-sensitive reporting standards without compromising press freedoms.
  • Platforms could take responsibility by voluntarily implementing cooling-off periods to calm algorithmic amplification during a crisis and by providing transparency on how crisis-related content is monetized and promoted.

As platforms and content creators profit from conflict-driven engagement, serious questions emerge about algorithmic ethics during international crises and the responsibility of technology companies that transform regional instability into shareholder value. U.S. policymakers must confront this reality, as their oversight of Silicon Valley platforms gives them unique leverage to address digital conflict escalation between nuclear-armed neighbors.

Divergent Digital Narratives with the Same Economic Objectives

The digital landscapes of India and Pakistan during the post-Pahalgam crisis reveal distinct cultural approaches to conflict representation. Content creators on both sides have monetized geopolitical tensions, but through markedly different strategies reflective of their sociopolitical environments.

Pakistani digital content displays a tendency toward self-referential humor that transforms domestic vulnerabilities into satirical content. This approach serves multiple functions: psychological protection, social cohesion, and strategic communication. The casual dismissal of war threats through humor represents not naivety but a deliberate coping mechanism in a society accustomed to various crises. Paradoxically, while this satire often critiqued internal governance, it simultaneously bolstered national unity during the conflict, with Pakistanis celebrating their pro-peace stance against Indian warmongering. Content creators leverage this inclination to use wit against war to generate shareable content that attracts cross-border engagement. 

Indian social media creators, in contrast, cultivated content centered on nationalist sentiments. YouTube channels, X handles, and Instagram accounts became amplifiers for outrage and misinformation. Misleading videos, photoshopped images, and hashtags advocating war flooded timelines, many originating from suspicious accounts, suggesting coordinated disinformation campaigns. In this environment, individuals advocating for peace faced significant backlash. For example, actress Dia Mirza’s call for peace amidst escalating tensions was met with severe criticism, with detractors accusing her of undermining national security.

To understand this difference, one must look closer at the civil-military equation in both countries. In India, the military action appeared to have strong civilian leadership approval and media endorsement, creating alignment between government policy and public narratives. In Pakistan, citizens’ responses navigated a more complex relationship with their government and military, where jokes contained a critique of both Indians and the failures of their own governance. While the conflict has gradually transformed the military’s image among citizens from a controversial political actor into a respected defensive shield against external threats, the meme culture continues to counter Indian nationalist content.

For instance, when Indian media falsely claimed that Karachi port had been destroyed and Indian social media accounts celebrated this fictional victory, Pakistanis in Karachi responded not with indignant denial but with sardonic humor. Residents shared images of potholed streets with captions suggesting India cannot destroy a city already destroyed by misgovernance, while others warned Indian forces that the attacking ship could lose “sideview mirror and audio unit” and “Indians should not bring their phones while attacking,” referring to Karachi’s street crime. This response demonstrates how Pakistani content creators transform actual lived experience into narrative countermeasures that simultaneously acknowledge domestic failures while undermining enemy propaganda. In contrast, the Indian celebration of a non-existent military success reveals how nationalistic content ecosystems can create closed information environments where emotional satisfaction supersedes factual verification, a pattern that generates high engagement metrics but potentially undermines strategic decision-making and public trust in the long term.

Despite their different stylistic approaches, content creators on both sides operate within identical economic structures. Platform algorithms reward emotional intensity and engagement regardless of the emotional valence. Pakistani humor and Indian nationalism both trigger high engagement rates, extended viewing sessions, and active comment sections—all metrics that directly influence creator compensation and visibility.

The Profit Machine Behind Conflict Content

The economic infrastructure of digital content creation has fundamentally altered how conflicts are presented to the public. Traditional media outlets covering international conflicts operated within established journalistic frameworks—editorial oversight, fact-checking processes, and institutional reputations that required maintenance through adherence to reporting standards. These organizations typically employed correspondents with subject matter expertise and followed publication cycles that allowed time for verification and context-building.

Digital content creation has democratized conflict reporting in ways that offer certain advantages. Independent voices can now document events from perspectives previously excluded from mainstream narratives. The immediacy of digital platforms enables real-time updates during rapidly evolving situations, and diverse viewpoints can emerge without passing through traditional media gatekeepers. This has created space for counter-narratives that challenge official accounts from both Indian and Pakistani authorities.

However, this democratization has introduced problematic economic incentives that prioritize engagement over accuracy. Digital content creators monetize crises through multiple revenue channels: advertising impressions, channel memberships, donations during live streams discussing military developments, and merchandise featuring nationalistic messaging. Unlike salaried journalists, these creators’ incomes directly correlate with metrics like views, watch time, and interaction rates—all of which increase with emotionally provocative content.

This profit-driven model creates what media analysts term an “engagement trap”—a situation where financial incentives reward emotional intensity rather than factual integrity. Platform algorithms systematically favor videos generating strong responses, longer viewing sessions, and higher comment rates—metrics that conflict-driven content reliably produces. Consequently, creators face economic pressure to develop increasingly dramatic narratives that may amplify tensions rather than promote nuanced understanding or peaceful resolution.
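
As a stylized illustration of that incentive, and not a description of any platform's actual ranking code, a score that weights predicted engagement will reliably push provocative items above sober ones:

```python
# A stylized toy ranking, not any platform's real algorithm: items with higher
# predicted engagement (clicks, watch time, comments) outrank calmer items.
posts = [
    {"title": "Measured explainer of the ceasefire terms",
     "p_click": 0.04, "watch_minutes": 1.2, "p_comment": 0.01},
    {"title": "OUTRAGEOUS claim about the other side's losses",
     "p_click": 0.18, "watch_minutes": 3.5, "p_comment": 0.09},
]

def engagement_score(post):
    # Arbitrary illustrative weights on the engagement signals.
    return 5 * post["p_click"] + 0.5 * post["watch_minutes"] + 10 * post["p_comment"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post['title']}")
```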

Empirical studies acknowledge that algorithm-driven recommendations systematically favor emotionally provocative content over factual reporting. While content promoting sadness and anger dominates, there is ample evidence that humorous content, particularly sarcasm, can also generate higher digital engagement. It has been reported that users who express negative emotions are consistently fed increasingly aligned content. This reinforcement intensifies over time and persists across different contexts, indicating the algorithm’s role in creating emotional filter bubbles. 

The economic structure creates a troubling reality: platforms financially benefit from regional instability. Shareholder value increases with engagement metrics, creating corporate incentives misaligned with conflict resolution or accurate information dissemination. This dangerous alignment between profit motives and conflict amplification demands urgent intervention through regulatory frameworks, platform accountability measures, and digital literacy initiatives that can disrupt the economics of division before the next crisis erupts.

America’s Digital Diplomacy

The United States played a crucial role in defusing military tensions between India and Pakistan, and it now has both the leverage and responsibility to address the digital conflict that persists. Because many major technology platforms are headquartered in the United States, the country occupies a unique position to influence how these companies handle crisis-related content. American jurisdiction over Silicon Valley giants provides powerful regulatory tools, and American policymakers therefore have a responsibility to develop guidelines on crisis content monetization that recognize platform design choices have real geopolitical consequences.

The U.S. government should also fund partnerships with trusted civil society actors in South Asia. These independent media watchdogs and digital rights organizations can identify emerging digital threats and promote conflict-sensitive reporting standards without compromising press freedoms. By supporting local expertise, American policy can help prevent digital escalation between nuclear powers, particularly in a crisis.

Meanwhile, digital platforms must also act more responsibly and update their own policies to reflect the role social media too often plays in conflict and human rights abuses. They could implement “cooling periods” during verified international crises. These temporary algorithm modifications would reduce engagement-based amplification of inflammatory content while maintaining information access. Rather than censoring speech, these measures would simply pause the algorithmic rewards fueling digital conflict, giving diplomacy room to function.
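
A rough sketch of the idea, with illustrative numbers only: during a verified crisis window, a ranking formula could keep relevance and source-reliability signals intact while scaling down the weight given to engagement.

```python
# Hypothetical sketch of a "cooling period": while a crisis is verified, the
# weight given to engagement signals is scaled down, leaving relevance and
# source-reliability signals untouched. All numbers are illustrative.
CRISIS_MODE = True        # would be switched on when a crisis is verified
COOLING_FACTOR = 0.3      # fraction of the normal engagement weighting

def rank_score(relevance, reliability, engagement):
    engagement_weight = 0.6 * (COOLING_FACTOR if CRISIS_MODE else 1.0)
    return 0.5 * relevance + 0.3 * reliability + engagement_weight * engagement

# The same inflammatory post scores lower while the cooling period is active.
print(rank_score(relevance=0.4, reliability=0.2, engagement=0.9))
```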

Platforms should also provide transparency through special reports during and after international conflicts. These disclosures would detail how crisis-related content is monetized and promoted, creating accountability for companies that profit from regional instability. The public deserves to know when platform economics align against peaceful resolution.

By addressing the profit incentives behind digital conflict narratives, U.S. policy can help create an information ecosystem that serves stability rather than shareholders. The real battle isn’t between the people of India and Pakistan—it’s between an economic model that rewards division and our collective interest in accurate information during crises.

House Armed Services Committee Reconciliation Proposal a Military Contractor and Billionaire Welfare Giveaway

As the House Armed Services Committee takes up its portion of what will become President Trump’s reconciliation bill, the Center for International Policy issued the following statement:

“The House Armed Services Committee (HASC) majority’s proposal to add $150 billion to the Pentagon budget is a clear attempt to funnel public money into the pockets of military contractors and billionaires, disguised as a national security measure.

“The so-called “Golden Dome” would get $24.7 billion. Trump’s fantasy missile shield will fail to make Americans any safer from today’s threats, but succeed in steering billions of dollars in contracts to Trump’s cronies, like Elon Musk’s SpaceX. The HASC proposal also includes $360 million to prevent the retirement of the obsolete F-22 and $1.5 billion for Sentinel ICBMs, one of the more dangerous elements of the plan’s $12.9 billion nuclear weapons spending spree.

“HASC Chairman Mike Rogers (R-AL) claims the arms industry is under-resourced, but the $899 billion in military spending already appropriated for this year exceeds the GDP of all but 20 countries. This proposal would balloon this year’s Pentagon budget to $1.05 trillion.

“Despite increased profits and cash flow, military contractors are spending more of their revenue on cash dividends and stock buybacks and less on R&D and capital expenditures, leaving it to taxpayers to cover those expenses. It’s fitting that this proposal is part of the same bill that will drive further inequality by cutting at least $300 billion in Medicaid funding and slashing other programs that help Americans working to make ends meet. The message of this package is clear: welfare is fine, so long as it’s for corporations and billionaires.”

Trump Executive Order on Defense Sales

The actions outlined in Executive Order “Reforming Defense Sales to Improve Speed and Accountability,” issued April 9, 2025, will further obscure a vast portion of the arms trade from public and congressional oversight, undermine Congress’ legally mandated authority over arms sales, and increase the flow of U.S. weapons to actors in armed conflicts and criminal groups without adequate end-use monitoring measures.

Obscuring the arms trade from Congress by raising thresholds for notification:

Section 3(a)(iii) of the Executive Order dictates that the Executive “submit a joint letter to the Congress proposing an update to statutory congressional certification (also known as congressional notification) thresholds of proposed sales under the FMS and Direct Commercial Sales (DCS) programs in the Arms Export Control Act (22 U.S.C. 2751 et seq.).”

  • At present, under the Arms Export Control Act, arms sales that reach a certain financial value require congressional notification and review. The transaction value triggering notification and the duration of the review period vary depending on the proposed recipient and weapons system in question. 

  • These notifications provide lawmakers an opportunity to weigh in on arms transfers and act as de facto transparency mechanisms in an otherwise opaque enterprise, facilitating public engagement and inter-branch negotiation on security assistance decisions which have massive impact on U.S. grand strategy and public image, as well as human rights and civilian protection.

  • Already, an enormous number of arms sales pass below this notification threshold. Between 2017 and 2020, the Department of State’s Office of the Inspector General found that more than 4,211 below-threshold arms transfers, worth roughly $11 billion, were made to Saudi Arabia and the United Arab Emirates as they bombed Yemen. Raising notification thresholds would increase the number of arms transfers that proceed without congressional review, warping the political risk calculus for sales and incentivizing even more permissive and less restrained practices.

Undermining congressional authority regarding arms sales:

Section 3(a)(iii) of the Executive Order further dictates that “The Secretary of State shall also work with the Congress to review congressional notification processes to ensure the timely adjudication of notified FMS and DCS cases.”

  • This is likely a reference to eliminating the “tiered review process,” one of the few tools Congress has to exert authority over the transfer of defense articles and services.

  • As the State Department Office of the Inspector General explains, “the Department has by longstanding practice submitted a preliminary or informal notification of prospective major arms transfers in advance of their formal notification to the congressional committees of jurisdiction.” This process allows Congress to ask questions or raise concerns confidentially with the administration prior to formal notification.

  • Because Congress has never successfully passed a Joint Resolution of Disapproval to entirely block an arms sale, this is de facto the only opportunity for Congress to exercise its constitutional oversight role in matters of foreign relations. Since 95 percent of arms sales are approved by the State Department within 48 hours, the tiered review process only affects transfers that pose a significant risk to U.S. national security or of violating international law.

Increasing the flow of arms to conflict zones, while risking U.S. proprietary information:

Section 3(c)(ii) orders “The Secretary of State and the Secretary of Defense shall review and update the list of defense items that can only be purchased through the FMS process (the FMS-Only List).”

  • The United States restricts the sale of some items, including those with advanced capabilities and proprietary defense information, to the FMS (Foreign Military Sales) government-to-government process. Among these defense articles are autonomous weapons systems, intelligence libraries, and select electronic warfare items. Allowing these sensitive weapons to be sold through the Direct Commercial Sales process cuts public oversight, while raising the risk of unauthorized transfer or diversion.

Section 3(a)(ii) similarly orders the Executive to “Reevaluate restrictions imposed by the Missile Technology Control Regime on Category I items and consider supplying certain partners with specific Category I items, in consultation with the Secretary of Commerce.”

  • The Missile Technology Control Regime (MTCR) is an international agreement that seeks to limit the proliferation of missiles and related technology capable of delivering weapons of mass destruction, and to limit the risk of such items reaching violent or destabilizing groups.

  • Undermining the MTCR not only risks the greater proliferation of the equipment that enables the use of weapons of mass destruction, but also means other parties to the MTCR, including Russia, Turkey, and India, may follow suit and make the transfer of these risky technologies to U.S. adversaries more likely.

For inquiries contact [email protected]

Global Social Media Bans Will Hurt Vulnerable Communities

Anmol Irfan is a Muslim-Pakistani freelance journalist and editor. Her work aims at exploring marginalized narratives in the Global South with a key focus on gender, climate and tech. She tweets @anmolirfan22

In early January, Meta put out a sudden and unexpected announcement that it would be ending its third-party fact-checking model in the US, saying that its approach to managing content on its platforms had “gone too far.” Instead, Meta will now be moving to a Community Notes model written by users of the platform, similar to X. These changes came amidst other larger changes to the platform’s hate speech and censorship rules which will be applied globally, with the announcement stating that the platform will be “getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate.”

  • Support Alternative Platforms - users can move to platforms like Bluesky and support the AT protocol, which are decentralised and challenge the control of the major tech companies.
  • Raise the Right Questions - advocates and governments need to look beyond profits and face value and start asking big tech companies the right questions about why their profits are based on potential harm.
  • Account for Cultural Nuance - governments and international organisations should establish safety protocols and ethical regulations around content moderation that account for cultural context and the needs of vulnerable groups.

But while the announcement focused on the idea of promoting “free speech”, critics pointed out that it didn’t actually detail just how those changes would take place. News outlets like NPR reported that Meta now allows users to call gay and trans people “mentally ill” and refer to women as “household objects and property.” Those are just some of the more obvious changes in a larger shifting power dynamic that over the last year has made it clear that the digital realm is increasingly unsafe. With the monopoly of digital communication and connection in the hands of a few Big Tech platforms, US-based companies like X and Meta have enough power and access across the world to not just impact everyday communication but influence social dynamics and even global politics. Facebook’s facilitation of the Rohingya genocide isn’t news, but it is an example of how the safeguards these platforms have supposedly had in place for years haven’t been working, and these changes may worsen the situation further, particularly for vulnerable groups. 

Is Social Media Becoming More Dangerous?

Across the United States and the world, digital spaces already unsafe for many marginalized groups are predicted to become more exclusionary and, in many ways, more dangerous. 

“When people talk about tech policies, when they talk about vulnerable communities they have a very narrow perspective of the US based minority,” says attorney Ari Cohn, who works at the intersection of speech and technology. That excludes the culturally nuanced and global conversation that is needed to safeguard vulnerable populations around the world. 

With fewer fact-checkers – even just in the US – and fewer controls online, these platforms are creating digital spaces that account even less for cultural nuances and needs than they did before, which can further endanger people in the Global South. Because these decisions are made by tech company leadership in the US, many vulnerable groups across the world aren’t even factored into the conversation about safety or risk.

“With the tech landscape generally the regular terms we acknowledge or are worried about are non consensual sexual or intimate images, but the definition of intimate is something we need to work around, so for example if we see a picture of a couple is leaked from Pakistan, to Meta it’s just a picture of people holding hands but for us the context will make it different, put those people at risk”, says Wardah Iftikhar, Project Manager at SHE LEADS, which focuses on eliminating online gender based violence.

It’s these cultural nuances and the risks posed to marginalized groups that make it essential to understand just what this push for “free speech” really means. Yael Eisenstat, an American democracy activist and technology policy expert, points to three changes that she says show these directives aren’t about free speech and risk contributing to more hate and extremism: first, the algorithm on platforms like X favors Elon Musk and the people he prioritizes; second, previously banned users have been let back onto the platform; and third, the new verification systems prioritize people who can pay, which further skews power into the hands of people who have money. 

“These changes combined are important because they are the opposite of actually trying to foster free open speech and tilting it towards people willing to pay, or people the owner is willing to prioritize, while at the same time making it clear that they no longer want to engage with civil society and outside experts,” Eisenstat shares, emphasizing how this disparity increases further in the Global South, in countries where X’s (formerly Twitter’s) $8 verification fee could mean a significant amount for many people. 

The risk of false, and possibly dangerous information further increases with the move away from fact checking. “If there were a fair community notes system I could see that this could be a better solution than the fact checking, but you have to take it into account that all or most of the community notes in the past which countered a claim, referred mostly to these fact checker organizations and their articles which were paid by meta, and now they’re gone,” says Berlin-based writer and lecturer Michael Seeman whose work focuses on the issues of digital capitalism.

It also further silos users within their own information bubbles online, which can lead to radicalization as well, particularly as Eisenstat points out that in the case of X many of those allowed back on the platform were extremists and white supremacists. Iftikhar says that social media platforms have the power to let us remain in our silos. 

“For people supporting Palestine they thought everyone was supporting Palestine and people supporting Israel thought everyone was supporting Israel and people in Palestine were being offensive,” she says.  

Big Tech & Global Autocracy

Of course, there is the actual shadowbanning of pro-Palestinian content that took place across many of Meta’s platforms, which in the larger picture also raises questions about what the future of these platforms’ relationships with global governments will look like – particularly those governments that want to exercise control over their citizens. 

Dr Courtney Radsch, a journalist, scholar, and advocate focused on the intersection of technology, media, and rights, points out that we’re already seeing the ripple effects of these policies globally through the de-amplification of journalists and Meta’s news ban in Canada. 

“This leads to an increase of harassment of people using these services especially people who are already marginalized, it has led to a rise in extremist and right wing populism being expressed on these platforms around the world and led to what many see as a rise of degradation on these platforms due to a rise of what many see as AI generated crap that flourishes on these platforms,” Radsch shares. 

The monopoly of these platforms over communications also means that governments only need to ban access to one or two platforms to completely silence any dissenting voices or citizen-led communication, and as is clear from Meta’s catering to Trump, they could just as easily cater to the demands of other governments as well. 

 “They no longer put a strong emphasis on filtering out the mis- and disinformation so it’s easy for autocracies to use platforms as a channel to augment their voice and send their message across the board,” says Xiaomeng Lu, director of Geo-technology at Eurasia Group. 

Decentralising Control

However, Eisenstat doesn’t believe that misinformation should be made illegal.

“The questions I think are more important is not how should these companies moderate misinformation but what is it about their design and structures where misinformation and salacious content is being amplified more than fact based information,” she says.

It’s important to raise the right questions around tech policy and cut through the noise these platforms are creating in order to develop long-term solutions that decentralize control over digital spaces. Radsch also believes that there shouldn’t be content-focused regulations. 

“There will always be propaganda, there has been throughout history, and platforms monetize this, they monetize engagement. Polarization and extremism do well, and the issue is less about a piece of misinformation and more about industry operations that have risen because it’s so profitable and because algorithms designed in a way to make platform money,” she says.

Cohn also points out that too much regulation may also have its own issues. “There is room to worry about whether there’s too much centralized power about what is fact,” he says, adding “I think the answer lies somewhere else, in decentralization, like the AT protocol that Bluesky operates on, when people have the easy ability to build a network that taps into a protocol that a lot of other people are using, it becomes a lot more difficult to tap into that or control that.” 

Radsch further believes that the domination of these platforms needs to be broken up, and also needs to be seen in line with the rise of AI dominance, which she says cannot be separated from what we’re seeing in terms of social media platforms consolidating power. 

The answers to curbing the power of platforms that have grown so big, and have so much control over the globe, aren’t easy – and as authoritarianism rises across the world, they may only get more difficult. But the first step can come from changing the way we ask the questions in the first place, and starting to question what drives these platforms instead of only questioning the content.