| venue (stringclasses, 5 values) | paper_openreview_id (stringclasses, 342 values) | paragraph_idx (int64, 1–314) | section (stringlengths, 2–2.38k) | content (stringlengths, 1–33.1k, may be ⌀) |
|---|---|---|---|---|
ICLR.cc/2025/Conference
|
Ql7msQBqoF
| 17
|
5 Re-evaluation and MCTS Search: After edits are applied, the KB is re-evaluated, generating new feedback and a reward score. This score guides a Monte Carlo Tree Search (MCTS) to explore different states of the KB, iterating through steps 1–4 to progressively refine the KB and improve the system’s overall performance.
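A minimal sketch of the loop described above, assuming the standard select/expand/simulate/backpropagate MCTS structure; the helper names (`evaluate_kb`, `propose_edits`, `apply_edits`) are hypothetical placeholders for illustration, not the paper's API.

```python
import math
import random

# Illustrative sketch: each MCTS node holds a knowledge-base (KB) state, and
# re-evaluation of the KB provides the reward that guides which edits to explore.

class Node:
    def __init__(self, kb, parent=None):
        self.kb = kb
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

def ucb(node, c=1.4):
    # Upper-confidence bound used during selection; unvisited nodes are explored first.
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(math.log(node.parent.visits) / node.visits)

def mcts_refine(root_kb, evaluate_kb, propose_edits, apply_edits, iterations=50):
    """evaluate_kb: KB -> (feedback, reward); propose_edits: (KB, feedback) -> list of edit sets."""
    root = Node(root_kb)
    for _ in range(iterations):
        # Selection: descend by UCB until a leaf is reached.
        node = root
        while node.children:
            node = max(node.children, key=ucb)
        # Expansion: generate candidate edits from the latest feedback (steps 1-4).
        feedback, _ = evaluate_kb(node.kb)
        for edits in propose_edits(node.kb, feedback):
            node.children.append(Node(apply_edits(node.kb, edits), parent=node))
        # Simulation: re-evaluate one child to obtain its reward score.
        child = random.choice(node.children) if node.children else node
        _, reward = evaluate_kb(child.kb)
        # Backpropagation: push the reward up to the root.
        while child is not None:
            child.visits += 1
            child.value += reward
            child = child.parent
    best = max(root.children, key=lambda n: n.visits) if root.children else root
    return best.kb
```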
|
builtin-array.md collections-persistent-vec.md math-fibonacci.md random-dice.md Output Program:
fun numIdenticalPairs(ns: Array[I32]): I32 =>
  var count: I32 = 0
  for i in Range(0, ns.size() - 1) do
    for j in Range(i + 1, ns.size()) do
      if ns(i) == ns(j) then
        count = count + 1
...
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 26
|
2
|
Our definition of deception aims to capture the nuances of indirect deceptive behavior, handle situations where providing full information is infeasible due to communication constraints, and provide a formalism that can be combined with existing decision making and RL algorithms. We measure deception in terms of the regret incurred by the listener from receiving the speaker’s communication. This regret can be defined as a function of the speaker’s actions, their effect on the listener’s belief, and the effect of these updated beliefs on the listener’s reward, providing a formalism that can be used as a reward function for the listener (e.g., to avoid deception) or as a metric (e.g., to measure if deception has occurred). By casting different intuitive notions of deception (i.e., the two sample reward functions) under the same regret umbrella, we provide a mathematical formalism that supports future algorithm design. Furthermore, the choice of reward for the listener allows granularity in specifying which types of outcomes one cares most about, whether it is inducing correct beliefs over some or all of the variables, or other goals.
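Equation (1) is not reproduced in this excerpt, so the following is only a hedged sketch of the idea in the paragraph above: deceptive regret as the listener's reward lost, at the true state, by best-responding to the belief induced by the speaker's communication rather than to a reference (e.g., fully informed) belief. All names and the toy numbers are illustrative, not the paper's.

```python
# Hedged sketch of "deceptive regret": how much worse the listener does (at the
# true state) when acting on the belief induced by the speaker's message,
# compared with acting on a reference belief. Everything here is illustrative.

def best_response(belief, reward, actions, states):
    """Action the listener would take given their belief (expected-reward maximizer)."""
    return max(actions, key=lambda a: sum(belief[s] * reward(s, a) for s in states))

def deceptive_regret(true_state, belief_after, belief_reference, reward, actions, states):
    """Positive when the speaker's communication leaves the listener worse off."""
    a_ref = best_response(belief_reference, reward, actions, states)
    a_ind = best_response(belief_after, reward, actions, states)
    return reward(true_state, a_ref) - reward(true_state, a_ind)

# Toy housing example: two possible house states, listener decides whether to buy.
states = ["good_house", "bad_house"]
actions = ["buy", "pass"]
r = lambda s, a: {("good_house", "buy"): 1.0, ("bad_house", "buy"): -1.0}.get((s, a), 0.0)
truthful = {"good_house": 0.1, "bad_house": 0.9}   # belief under honest disclosure
induced = {"good_house": 0.8, "bad_house": 0.2}    # belief after a misleading message
print(deceptive_regret("bad_house", induced, truthful, r, actions, states))  # 1.0 > 0: deceptive
```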
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 27
|
2
|
(Figure 2 notation residue: $\pi^S$, $a^S_t$, $b^L_t$, $\hat\pi^S$, $\pi^L$, $b^L_{t+1}$, $a^L_{t+1}$, $r_L$)
2.4 DEFINING UTILITIES FOR THE LISTENER
Depending on the scenario, a listener may place different value on obtaining accurate information and on making correct or generally beneficial decisions. In this section, we show how different intuitively reasonable notions of deception can emerge from our definition above, simply by making different choices for the listener’s reward $r_L$.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 28
|
2
|
While on the surface it might seem strange to equate deception with causing suboptimal rewards for the listener, we argue that this general framework allows us to capture many of the intricacies of deceptive interactions, including “white lies” and true but misleading statements, if the reward function $r_L$ is selected carefully. In the following subsections, we explore ways to define $r_L(s, a_S)$ to capture our intuition about what constitutes deceptive behavior. We will show how the “logical truth” definition is in fact subsumed by our more general definition for an appropriate choice of reward, but our definition can also capture more nuanced situations.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 29
|
2
|
The natural starting point for $L$’s reward is to make it equal to the “task reward” $\hat{r}_L$ (e.g., a house buyer might receive a higher reward for buying the right house). Defining the reward of $L$ in this way is reasonable in cases in which the “task reward” captures everything $L$ cares about. This could include utilities indicating that $L$ does not care about being deceived as long as it improves outcomes.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 30
|
2
|
Figure 2: The interaction between the speaker and the listener is as follows: The listener $L$’s belief is updated based on $S$’s action (interpreted according to $L$’s model of $S$’s behavior $\hat\pi^S$). The listener will make a decision and receive reward based on their updated belief. (Equation 2), where $\hat{r}_L$ is the listener’s “task reward”. The speaker is considered deceptive if the interaction with the listener leaves them worse off in terms of expected “task reward”. The “task reward” captures the idea that people may care less about omissions or deception irrelevant to the task, such as Sam talking about how the house has a beautiful front porch when this is an embellishment and does not influence Luca’s opinion of how valuable the house is to them.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 31
|
2
|
Deception as leading to worse beliefs: $r_L(s, a_L) = b_L(s)$, (3) where $b_L$ is the current listener belief, which we can obtain from the listener action as described in Appendix C.1. This definition can be thought of as a “score on a belief-accuracy test”: consider an example scenario where $L$ is answering questions on an exam administered by $S$. As $L$’s expected score on this exam is the probability $L$ assigns to the correct answer, we can formulate $L$’s reward function as the proportion of questions they get correct on the exam. It is also straightforward to extend this construction to weight correct beliefs over some dimensions, or even functions of the state, more highly – for example, we might define the listener’s reward in the house example as the probability they assign to the true monetary value of the house, which is a derived quantity that depends on the house’s features.
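A small sketch of the belief-accuracy reward $r_L(s, a_L) = b_L(s)$ and the weighted, per-feature variant mentioned above; the feature names, weights, and numbers are illustrative assumptions, not the paper's.

```python
# Sketch of the belief-accuracy reward: the probability the listener assigns to
# the true state ("exam score"), optionally re-weighted so that some features or
# derived quantities matter more. Names and values are illustrative.

def belief_reward(belief, true_state):
    """Probability the listener's belief assigns to the true state."""
    return belief.get(true_state, 0.0)

def weighted_belief_reward(feature_beliefs, true_features, weights):
    """Per-feature version: weighted probability assigned to each true feature value."""
    total = sum(weights.values())
    return sum(w * feature_beliefs[f].get(true_features[f], 0.0)
               for f, w in weights.items()) / total

# Toy example with three independent house features.
true_features = {"porch": True, "new_roof": False, "quiet_street": True}
feature_beliefs = {
    "porch": {True: 0.9, False: 0.1},
    "new_roof": {True: 0.7, False: 0.3},   # the listener was misled about the roof
    "quiet_street": {True: 0.6, False: 0.4},
}
weights = {"porch": 0.2, "new_roof": 1.0, "quiet_street": 0.5}  # roof matters most
print(weighted_belief_reward(feature_beliefs, true_features, weights))
```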
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 32
|
2
|
$r_L(s, a_L) = \hat{r}_L(s, a_L),$   (2)
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 33
|
2
|
However, we claim that the regret formulation is expressive enough to capture a variety of intuitive notions of deception. An obvious criticism might be that people might still feel deceived if they were “tricked” into making a good decision. However, this can be captured simply by redefining their reward: instead of receiving a reward only for a good decision, they also receive a reward for having an accurate belief over the state, or some subset of the state. For example, if we use $r_L(s, a_L) = \hat{r}_L(s, a_L) + w\,b_L(s)$, where $\hat{r}_L(s, a_L)$ is the task reward and $w$ is a constant weight, the $b_L(s)$ term will provide lower regret whenever the speaker changes the listener’s beliefs to be more accurate, and higher regret when it makes their beliefs less accurate. Below we show how, for a specific choice of $r_L(s, a_S)$ in Equation (1), we can also capture the accuracy of beliefs in our metric for deception.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 34
|
2
|
We’ve shown how $r_L(s, a_S)$ in Equation (1) can be defined for different notions of deception. By quantifying deception as regret, we can define deception based on the beliefs or downstream task reward of the listener that are induced by the speaker’s actions. Additionally, we’ve shown how one could combine them in practice.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 35
|
3 EXPERIMENTAL METHODOLOGY
The goal of our evaluation is to determine how well our proposed metric for deception aligns with human intuition. To that end, we have: (1) designed three scenarios to study deceptive behaviors;
|
[Table 1 layout residue: columns are Scenario; Learned Regret (ours): Task, Belief, Combined; LLMs: GPT-4, LLaMa, Google Bard; rows are Housing Scenario, Nutrition Scenario, Friend Scenario.] (2) developed an interactive dialogue management system where we can deploy agents that are deceptive to different degrees according to our proposed definition; (3) created a pipeline to measure the deceptiveness of responses from an LLM in a negotiation task.
|
ICLR.cc/2025/Conference
|
ONfWFluZBI
| 11
|
2 CONTRASTIVE LEARNING FOR TIME-SERIES
|
Figure 1: DCL framework: The encoder $h$ is shared across the reference $y_t$, the positive $y_{t+1}$, and the negative samples $y^-_i$. A dynamics model $\hat f$ forward-predicts the reference. A (possibly latent) variable $z$ can parameterize the dynamics (cf. § 4) or external control (cf. § I). The model fits the InfoNCE loss ($\mathcal{L}$).
(4) and call the resulting algorithm dynamics contrastive learning (DCL). Intuitively, we obtain two observed samples $(y, y')$ which are first mapped to the latent space, $(h(y), h(y'))$. Then, the
[Footnote 1] Note that we can equivalently write $\phi(\tilde h(x), \tilde h'(x'))$ using two asymmetric encoder functions; see additional results in Appendix D.
|
ICLR.cc/2025/Conference
|
ONfWFluZBI
| 12
|
2 CONTRASTIVE LEARNING FOR TIME-SERIES
|
$\psi(y, y') := \phi(\hat f(h(y)), h(y')) - \alpha(y'),$   (4)
|
ICLR.cc/2025/Conference
|
KA2Rit4ky1
| 1
|
Title
|
PDETIME: RETHINKING LONG-TERM MULTIVARIATE TIME SERIES FORECASTING FROM THE PERSPECTIVE OF PARTIAL DIFFERENTIAL EQUATIONS
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 36
|
3 EXPERIMENTAL METHODOLOGY
|
For the first experiment, we ask humans to rate the deceptiveness of each interaction in a series of conversational scenarios, and provide comparisons by measuring the correlations between our approach as outlined in Equation (1), human ratings, and baseline evaluations by three state-of-the-art LLMs (OpenAI, 2023; Touvron et al., 2023; Google, 2023). For the second experiment, we evaluate our dialogue management system by conducting a user study to measure the correlation between human ratings after interacting with the system and the deceptive regret of the deployed policy. For our third experiment, we use an LLM to generate negotiation dialogues based on a standard negotiation dataset (Lewis et al., 2017b), ask humans to label the deceptiveness in these negotiations, and measure the correlation between human ratings and our deceptive regret. For our study with human participants, we received IRB approval and used CloudResearch Connect to recruit participants.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 37
|
3 EXPERIMENTAL METHODOLOGY
|
Table 1: Summary of correlation values between human deceptive labels and learned task regret (ours), belief regret (ours), combined regret (ours), and deceptive labels from three LLMs, for three different real-life scenarios where deception might occur. A larger correlation value indicates a method that aligns strongly with human intuitive notions of deceptive behavior. We find that the housing situation has the least ambiguity when it comes to aligning with human notions of deception, with more ambiguity present for the nutrition and friend scenarios. These results were statistically significant (p-value < 0.001).
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 38
|
3.1 MEASURING DECEPTION IN CONVERSATIONAL SCENARIOS
We have designed three scenarios to capture how deception is perceived by humans in different contexts: a house bargaining interaction between a seller and a buyer, a consultation between a nutritionist and a patient, and small talk between two colleagues. These have been designed to consider different models of the listener, leading to differing ratings of deception (e.g., it is more deceptive to lie about features of a house than lie about your hobbies). Each scenario consists of three features that can be either true or false. A sample interaction is shown in Figure 4. We provide further details about the scenarios in Appendix D.
|
Scenario generation. We programmatically generate conversation scenarios for each situation described in Appendix D.1, consisting of listener preferences and speaker actions. Similarly to how prior work (Bakhtin et al., 2022) translates symbolic moves into natural language for Diplomacy, we use an LLM (gpt-3.5-turbo) (Brown et al., 2020) to wrap “symbolic” POMDP communication actions from our model into natural text. We consider a setting in which the state consists of $k = 3$ features, with Luca “interested” in a random subset of these features. The features are considered independently by Luca, and there are no correlations between features.
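A minimal sketch of the symbolic sampling step just described (k = 3 independent boolean features, a random subset that the listener cares about, a prior belief per feature, and the speaker's possibly false reveals). The structure and field names are assumptions for illustration, not the paper's schema; the LLM wrapping step is only indicated by a comment.

```python
import random

# Illustrative symbolic scenario generator: true feature values, the subset the
# listener (Luca) cares about, a prior belief per feature, and the subset the
# speaker (Sam) chooses to claim, truthfully or not.

FEATURES = ["spacious_backyard", "renovated_kitchen", "quiet_neighborhood"]

def sample_scenario(k=3, seed=None):
    rng = random.Random(seed)
    features = FEATURES[:k]
    true_values = {f: rng.random() < 0.5 for f in features}
    cares_about = rng.sample(features, rng.randint(1, k))                    # Luca's preferences
    prior_belief = {f: round(rng.uniform(0.2, 0.8), 2) for f in features}    # P(feature is true)
    revealed = {f: (true_values[f] if rng.random() < 0.5 else not true_values[f])
                for f in rng.sample(features, rng.randint(0, k))}            # Sam's (possibly false) claims
    return {"true_values": true_values, "cares_about": cares_about,
            "prior_belief": prior_belief, "revealed": revealed}

scenario = sample_scenario(seed=0)
# A separate step (not shown) would wrap these symbolic actions into natural
# language with an LLM, e.g. "Sam says the kitchen was recently renovated."
print(scenario)
```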
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 39
|
3.1 MEASURING DECEPTION IN CONVERSATIONAL SCENARIOS
|
User study setup. We show each of $N = 50$ users a series of 10 random scenarios for each situation (a total of 1500 interactions), consisting of: 1) the true features (that are only known to Sam), 2) the prior belief $b^0_L$ Luca has about such features, 3) which features Sam revealed to Luca (given that the participants are aware of the true features, they can determine whether Sam was truthful or not), and 4) which features Luca cares about. For each scenario, participants were asked to rate whether they believe Sam’s behavior is deceptive on a 1–5 Likert scale, from “Strongly Disagree” to “Strongly Agree”. We describe our scenario sampling mechanism and provide details in Appendix D. Figure 4 provides examples of the interactions users will see for the three real-life scenarios.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 40
|
3.2 DEVELOPING A DIALOGUE MANAGEMENT SYSTEM
To understand how a human’s perception of deception changes upon interaction with a system, we have built a dialogue management system as shown in Figure 3 to simulate a real-world scenario where a human could be easily deceived. We chose to demonstrate this through the housing scenario detailed in Figure 1, where a human must input their preferences and engage in dialogue with an online representative (our model) who will share information about an available home. For this study, we have added further complexity to the scenario by increasing the number of features to eight and
|
including correlations between features, such that the human user cannot determine if the agent is lying within a few rounds. Similar to the previous setup, we use an LLM (gpt-3.5-turbo) (Brown et al., 2020) to wrap actions from our model into natural text, this time selecting actions that either maximize or minimize the deceptive regret (task and/or belief utility) at random based on the house preferences. For our user study to obtain deceptive human ratings, we have $N = 30$ users interact with our system.
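A hedged sketch of the action-selection step just described: per conversation, the agent is randomly set to either maximize or minimize a deceptive-regret score over candidate communication actions. The scoring function is assumed to be something like the regret sketch given earlier in this dump; all names and the toy scores are illustrative.

```python
import random

# Illustrative action selection for the dialogue system described above.
# `score_regret` is assumed to map a candidate action to a deceptive-regret value.

def pick_action(candidate_actions, score_regret, rng=random):
    mode = rng.choice(["maximize_regret", "minimize_regret"])
    chooser = max if mode == "maximize_regret" else min
    return chooser(candidate_actions, key=score_regret), mode

# Example with a made-up scoring table over three candidate utterances.
scores = {"claim_new_roof": 0.8, "stay_silent": 0.3, "admit_old_roof": 0.0}
action, mode = pick_action(list(scores), scores.get)
print(mode, action)
```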
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 49
|
7 ETHICS STATEMENT.
We acknowledge that our formalisms may pose non-negligible ethical risks. They could be especially dangerous if used for targeted deceptive advertising, recommendation systems, and dialogue systems. We discourage the use of deceptive AI systems for malicious purposes or harmful manipulation. We hope this research provides grounding for how to define deception in decision making and build systems that can mitigate and defend against deceptive behaviors from both humans and AI systems. This work offers a concrete definition of deception under the formalism of decision-making. We expect our work to only be a step in the direction of formally quantifying and understanding deception in autonomous agents: while our definitions provide a working formalism, they may leave open edge cases. A key area of future work is to generalize these definitions to settings that reflect realistic domains of machine learning, such as dialogue systems, robotics, and advertising. Large-scale applications may include reward terms that prevent deception and detection methods. Exploring these applications may not only lead to practically useful systems aligned with human values but also suggest ways to formalize deception in autonomous agents.
|
Matthew Aitchison, Lyndon Benke, and Penny Sweetser. Learning to deceive in multi-agent hidden role games. In Stefan Sarkadi, Benjamin Wright, Peta Masters, and Peter McBurney (eds.), Deceptive AI, pp. 55–75, Cham, 2021. Springer International Publishing. ISBN 978-3-030-91779-1.
|
ICLR.cc/2025/Conference
|
3bcN6xlO6f
| 41
|
6.1 MAIN RESULTS
|
GPT-4o, Gemini-1.5-Pro, Claude-3.5-Sonnet, LLaVA-Video, Qwen2-VL-7B, VidDiff (ours)
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 41
|
3.4 EVALUATION
We explain the results from our three experiments below.
|
Q1: Does our definition of deception align with human judgment? We compare human deception scores from our user study against regrets calculated as per Equation (2) and Equation (3) by computing their correlation, as shown in Table 1. We combine the two reward terms (labeled “Combined”) to see whether that is able to better capture human intuitive notions of deception. To do so, we regress human deceptiveness labels on both our regret metrics, individually and jointly. While using both reward terms in conjunction improves predictions, the majority of the predictive power comes from the belief regret $b_L(s)$. We largely find that a combined regret formulation better captures human intuitive notions of deception across all three scenarios, confirming our hypothesis from Section 2.3 that both belief and task reward contribute to improving the correlation with human judgment. For the housing scenario, we find a significant correlation of 0.67 between human responses and belief-based regret, and a correlation of 0.34 between human responses and task-reward-based regret. This matches our intuition that humans primarily focus on the truthfulness of statements more than just outcomes (which is closer to a purely utilitarian perspective). We find the least correlated values for the nutrition scenario, which might indicate that, due to ambiguity in the listener’s observation model, humans may be noisy when discerning whether deception is taking place. We found that for these two scenarios, humans ranked interactions as overall being less deceptive, whereas our model labeled them as comparatively more deceptive. This may indicate that additional reward terms are needed to capture the conservative labeling of humans and the subjectivity of defining deception depending on the scenario.
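A sketch of the analysis described above: correlate human deceptiveness labels with each regret metric separately, then regress on both jointly for the "Combined" score. It uses NumPy, SciPy, and scikit-learn; the arrays are made-up toy stand-ins, not the study's ratings.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

# Toy stand-in data: human Likert ratings plus belief and task regrets per interaction.
human = np.array([4.5, 1.0, 3.0, 5.0, 2.0, 4.0, 1.5, 3.5])
belief_regret = np.array([0.9, 0.1, 0.5, 1.0, 0.2, 0.8, 0.1, 0.6])
task_regret = np.array([0.7, 0.0, 0.2, 0.9, 0.3, 0.4, 0.2, 0.5])

# Individual correlations (belief regret vs. task regret).
print("belief regret r =", pearsonr(human, belief_regret)[0])
print("task regret   r =", pearsonr(human, task_regret)[0])

# Joint regression on both regret terms, then correlate the fitted predictions ("Combined").
X = np.column_stack([belief_regret, task_regret])
combined = LinearRegression().fit(X, human).predict(X)
print("combined      r =", pearsonr(human, combined)[0])
```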
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 42
|
3.4 EVALUATION
|
For multi-step conversations occurring as part of the dialogue management system, we found the correlation between deceptive ratings from humans and our formalism to be 0.72 for belief utility and 0.45 for task utility, slightly higher than the correlations of 0.67 and 0.34 when users observe interactions, as shown in Table 1 for the housing interaction. This shows that our deception metric has the ability to scale when the conversation contains the complexity present in the real world, including correlations in beliefs.

Q2: How do LLM judgments compare at discerning deception? LLMs have been shown to sometimes be successful in performing data annotation, sometimes even surpassing human annotator quality (Pan et al., 2023; He et al., 2023; Wang et al., 2021). We explore how well LLM evaluations correlate with human judgments about deceptiveness in Table 1. The purpose of this evaluation is to examine whether or not it is trivial to infer the degree of deception in these statements. In particular, we use three state-of-the-art LLMs (OpenAI, 2023; Touvron et al., 2023; Google, 2023) with the same prompt that was given to the human annotators, asking whether each given interaction is deceptive, and compare the LLM deception labels with those in the user study. We find that even very large, state-of-the-art LLMs, such as GPT-4, do not make deceptiveness judgments on these examples that align as well with user intuition as even the worst choice of reward for our approach. Overall, we find GPT-4 aligning more closely with humans than Google Bard and LLaMa across all three situations. These experiments validate our hypothesis that our formalism can be effective in estimating the “degree of deceptiveness” of human interactions and that our proposed formulation aligns with human intuition. For an initial exploration of how to create non-deceptive agents, see Appendix D.2.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 43
|
3.4 EVALUATION
|
Q3: How can we leverage a regret theory of deception to measure deception from LLMs? Due to the increasing concern that LLMs could be used to deceive and manipulate people on a large scale, we generated sample negotiations for the Deal or No Deal task (Lewis et al., 2017a) to demonstrate a case of deception. Although we had humans rate only 30 dialogues, we generated a total of 500 dialogues to ensure a range of diverse strategies employed by agents in conversation and, by extension, a larger range of deceptive regret values. We found a correlation of 0.82 between human ratings of deception for the subset of conversations and our deceptive regret model, showing that human intuition agrees with the labels we assign. We expect that these labels may be leveraged as rewards for learning deceptive and non-deceptive language models in the future.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 44
|
4 RELATED WORK
|
Deception in social psychology and philosophy. Deception has been defined and analyzed through philosophy (Masip et al., 2004; Martin, 2009; Todd, 2013; Fallis, 2010; Mahon, 2016; Sakama et al., 2014) and psychology (Kalbfleisch & Docan-Morgan, 2019; Zuckerman et al., 1981; Whaley, 1982). To our knowledge, the most comprehensive definition (Masip et al., 2004) integrates the work of several researchers on lying (Coleman & Kay, 1981) and deceptive communication (Miller & Stiff, 1993), considering deception as the act of deliberately hiding, altering, or manipulating information—through words or actions—to mislead others and maintain a false belief. However, these definitions are mostly qualitative and are difficult to turn into precise mathematical statements that could be leveraged as objectives for training autonomous agents that embody various degrees of deception. Our definition formalizes deception within POMDPs, and is designed to be used as a reward function to build non-deceptive agents. Importantly, our work is inspired by work in moral psychology that contrasts utilitarianism, which aims to maximize the overall well-being (Driver, 2022), with deontological philosophies, which posit inviolable moral rules that do not vary with the situation (Greene, 2007). Our formalism allows both utilitarian and belief perspectives of deception to be represented by a regret formulation that can be used as a utility measure. Several works also define deception depending on whether or not the listener is aware (i.e., coercion and rational persuasion) (Todd, 2013) or unaware (i.e., lying or manipulation) (Noggle, 2022) of deceptive influence. Our work represents both, as we do not make any assumptions about the listener (i.e., the listener uses a model that may or may not assume the speaker often lies).
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 50
|
7 ETHICS STATEMENT.
|
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané.
|
ICLR.cc/2025/Conference
|
hWF0HH8Rr9
| 75
|
5 CONCLUSION
|
J. Terry, Benjamin Black, Nathaniel Grammel, Mario Jayakumar, Ananth Hari, Ryan Sullivan, Luis S. Santos, Clemens Dieffendahl, Caroline Horsch, Rodrigo Perez-Vicente, et al. PettingZoo: Gym for multi-agent reinforcement learning. Advances in Neural Information Processing Systems, 34:15032–15043, 2021.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 45
|
4 RELATED WORK
|
Deception in language models and mitigation. With the development of LLMs with emergent capabilities (Wei et al., 2022), there has been a growing concern that these models may exhibit deceptive tendencies (Kenton et al., 2021). This occurs due to the model having misspecified objectives, leading to harmful content (Richmond, 2016) and manipulative language (Roff, 2020). Our work can potentially help address this misalignment (Amodei et al., 2016) by providing a definition of deception that can modify the objective function or constrain the behavior of reinforcement learning agents to avoid deceptive tendencies. Several methods have focused on detecting deception in human text by using language models with manual feature annotation (Fitzpatrick & Bachenko, 2012), contextual information (Fornaciari et al., 2021), and textual data in a supervised manner (Shahriar et al., 2021; Zee et al., 2022; Tomas et al., 2022). These methods have been extended to detecting deception in spoken dialogue by learning multi-modal models through supervised learning (Hosomi et al., 2018; Soldner et al., 2019) and asking questions to improve estimates (Tsunomori et al., 2015). However, they may not cover the range of deceptive capabilities of LLMs, as they only classify each utterance independently. Our work instead takes advantage of the sequential nature of interactions in AI systems in defining deception. We also differ from work on adversarial attacks (Franzmeyer et al., 2023; Tondi et al., 2018), as we provide a general regret formulation under which the deceptive behavior of the speaker can be defined, quantified, and used as a way to label utterances in conversations with varying levels of deceptiveness. With respect to work on training agents to be non-deceptive (Hubinger et al., 2024), we would like to acknowledge that our formalism allows a system designer to capture the nuance in defining deception depending on the scenario.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 46
|
4 RELATED WORK
|
Deception in multi-agent systems and robotics. Our work approaches deception from the view of sequential decision-making problems, considering the effect of communication actions on a listener’s beliefs. While expressing deception as changes in beliefs has been examined in prior work (Taylor & Whitehill, 1981; McWhirter, 2016; Gmytrasiewicz, 2020; Ward et al., 2023), our work converts belief-based definitions of deception into utility measures that can be used in reinforcement learning to avoid deceptive tendencies. Moreover, recent works (Sarkadi et al., 2019; Adhikari & Gmytrasiewicz, 2021; Ederer & Min, 2022; Sarkadi, 2018) have used communication or game theory to model deception of an agent with a theory of mind under uncertainty, and other game-theoretic approaches (Santos & Li, 2009; Chelarescu, 2021; Aitchison et al., 2021) have analyzed deception from a utilitarian perspective. Masters et al. (2021) have provided a qualitative account of deception in AI, and Park et al. (2023b) define deception as the inducement of false beliefs when trying to achieve an outcome other than the true one. In contrast, our work provides a general framework that captures both belief-based and utility-based deception and quantifies deception as a continuous quantity, allowing us to measure the “degree of deceptiveness” of a speaker toward a listener. Additionally, while these methods assume that the speaker is intentionally deceptive by using a theory of mind, our work assumes that the speaker can be intentionally or non-intentionally deceptive, which depends on both the specific setting at hand and whether or not the speaker can access ground-truth information. Lastly, several works have studied deception in non-verbal behavior, such as robot motion planning that deceives a person or makes it hard to infer intentions (Wagner & Arkin, 2011; Shim & Arkin, 2012; 2013; Dragan et al., 2015; Tomas et al., 2022; Ayub et al., 2021; Masters & Sardina, 2017). While our work approaches deception from the view of sequential decision making, it makes no assumptions on the action space, allowing it to be defined for both symbolic and textual forms of communication.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 47
|
7 ETHICS STATEMENT.
|
REFERENCES
Marwa Abdulhai, Isadora White, Charlie Snell, Charles Sun, Joey Hong, Yuexiang Zhai, Kelvin Xu, and Sergey Levine. LMRL Gym: Benchmarks for multi-turn reinforcement learning with language models, 2023.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 48
|
7 ETHICS STATEMENT.
|
Sarit Adhikari and Piotr J. Gmytrasiewicz. Telling friend from foe - towards a Bayesian approach to sincerity and deception. In Dongxia Wang, Rino Falcone, and Jie Zhang (eds.), Proceedings of the 22nd International Workshop on Trust in Agent Societies (TRUST 2021), co-located with the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021), London, UK, May 3-7, 2021, volume 3022 of CEUR Workshop Proceedings. CEUR-WS.org, 2021. URL http://ceur-ws.org/Vol-3022/paper7.pdf.
|
ICLR.cc/2025/Conference
|
ONfWFluZBI
| 9
|
2 CONTRASTIVE LEARNING FOR TIME-SERIES
|
In contrastive learning, we aim to model similarities between pairs of data points (Figure 1). Our full model $\psi$ is specified by the log-likelihood
$$\log p_\psi(y \mid y^+, N) = \psi(y, y^+) - \log \sum_{y^- \in N \cup \{y^+\}} \exp(\psi(y, y^-)), \quad (2)$$
where $y$ is often called the reference or anchor sample, $y^+$ is a positive sample, $y^- \in N$ are negative examples, and $N$ is the set of negative samples. The model $\psi$ itself is parameterized as a composition of an encoder, a dynamics model, and a similarity function and will be defined further below. We fit the model by minimizing the negative log-likelihood on the time series,
$$\min_\psi \mathcal{L}[\psi] = \min_\psi \; \mathbb{E}_{t, t_1, \dots, t_M \sim U(1,T)}\left[-\log p_\psi(y_{t+1} \mid y_t, \{y_{t_m}\}_{m=1}^{M})\right], \quad (3)$$
where positive examples are just adjacent points in the time series, and $M$ negative examples are sampled uniformly across the dataset. $U(1, T)$ denotes a uniform distribution across the discrete time steps.
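A compact sketch of the objective in Equations (2)–(3), with $\psi(y, y') = \phi(\hat f(h(y)), h(y'))$, taking $\phi$ to be negative squared Euclidean distance and omitting the correction term $\alpha$ for brevity. This is an illustrative PyTorch rendering under those assumptions, not the authors' implementation.

```python
import torch

# Sketch of a DCL-style InfoNCE loss: an encoder h maps observations to latents,
# a dynamics model f_hat forward-predicts the reference, and the similarity is
# negative squared distance. All architectural choices are illustrative.

d_obs, d_lat = 10, 3
h = torch.nn.Sequential(torch.nn.Linear(d_obs, 64), torch.nn.ReLU(), torch.nn.Linear(64, d_lat))
f_hat = torch.nn.Linear(d_lat, d_lat)  # (possibly linear) dynamics model

def psi(y_ref, y_other):
    """Similarity between the forward-predicted reference and another sample."""
    return -((f_hat(h(y_ref)) - h(y_other)) ** 2).sum(dim=-1)

def dcl_loss(y_t, y_pos, y_neg):
    """InfoNCE negative log-likelihood: y_t, y_pos are (B, D); y_neg is (B, M, D)."""
    pos = psi(y_t, y_pos)                               # (B,)
    neg = psi(y_t.unsqueeze(1), y_neg)                  # (B, M)
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)  # positive included in the log-sum-exp
    return (-pos + torch.logsumexp(logits, dim=1)).mean()

# Toy usage: adjacent time steps as positives, random time steps as negatives.
T, B, M = 100, 16, 8
series = torch.randn(T, d_obs)
t = torch.randint(0, T - 1, (B,))
y_t, y_pos = series[t], series[t + 1]
y_neg = series[torch.randint(0, T, (B, M))]
loss = dcl_loss(y_t, y_pos, y_neg)
loss.backward()
```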
|
ICLR.cc/2025/Conference
|
XwibrZ9MHG
| 10
|
1 INTRODUCTION
|
Table 2: Feature comparison of the PokeFlex dataset with other deformable object datasets.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 51
|
7 ETHICS STATEMENT.
|
Concrete problems in AI safety, 2016.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 52
|
7 ETHICS STATEMENT.
|
Ali Ayub, Aldo Morales, and Amit Banerjee. Using Markov decision process to model deception for robotic and interactive game applications. In 2021 IEEE International Conference on Consumer Electronics (ICCE), pp. 1–6, 2021. doi: 10.1109/ICCE50685.2021.9427633.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 53
|
7 ETHICS STATEMENT.
|
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. Constitutional AI: Harmlessness from AI feedback, 2022.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 54
|
7 ETHICS STATEMENT.
|
Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, Andrew Goff, Jonathan Gray, Hengyuan Hu, Athul Paul Jacob, Mojtaba Komeili, Karthik Konath, Minae Kwon, Adam Lerer, Mike Lewis, Alexander H. Miller, Sasha Mitts, Adithya Renduchintala, Stephen Roller, Dirk Rowe, Weiyan Shi, Joe Spisak, Alexander Wei, David Wu, Hugh Zhang, and Markus Zijlstra. Human-level play in the game of Diplomacy by combining language models with strategic reasoning. Science, 378(6624):1067–1074, 2022. doi: 10.1126/science.ade9097. URL https://www.science.org/doi/abs/10.1126/science.ade9097.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 55
|
7 ETHICS STATEMENT.
|
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020. URL https://arxiv.org/abs/2005.14165.
|
ICLR.cc/2025/Conference
|
9DrPvYCETp
| 23
|
3 SHARED RECURRENT MEMORY TRANSFORMER
|
reflecting the agent’s performance during the previous step. The future rewards are discounted by a factor $0 \le \gamma \le 1$ defining their importance. Before the next step, each agent also receives its local observation $o^{(u)} \in O$ based on the following global observation function $\pi^{(u)}(a^{(u)} \mid h^{(u)}): T \times A \to [0, 1]$.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 56
|
7 ETHICS STATEMENT.
|
Thomas L. Carson. Lying and Deception: Theory and Practice. New York: Oxford University Press.
Paul Chelarescu. Deception in social learning: A multi-agent reinforcement learning perspective, 2021. URL https://arxiv.org/abs/2106.05402.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 57
|
7 ETHICS STATEMENT.
|
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. PaLM: Scaling language modeling with pathways, 2022. URL https://arxiv.org/abs/2204.02311.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 58
|
7 ETHICS STATEMENT.
|
Linda Coleman and Paul Kay. Prototype semantics: The English word lie. Language, 57:26, March 1981.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 59
|
7 ETHICS STATEMENT.
|
Anca D. Dragan, Rachel Holladay, and Siddhartha S. Srinivasa. Deceptive robot motion: synthesis, analysis and experiments. Autonomous Robots, 39:331–345, 2015.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 60
|
7 ETHICS STATEMENT.
|
Julia Driver. The History of Utilitarianism. In Edward N. Zalta and Uri Nodelman (eds.), The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, Winter 2022 edition, 2022.
|
ICLR.cc/2025/Conference
|
ONfWFluZBI
| 10
|
2 CONTRASTIVE LEARNING FOR TIME-SERIES
|
To attain favourable properties for identifying the latent dynamics, we carefully design the hypothesis class for $\psi$. The motivation for this particular design will become clear later. To define the full model, a composition of several functions is necessary. Recall from Eq. 1 that the dynamics model is given as $f$ and the mixing function is $g$. Correspondingly, our model is composed of the encoder $h: \mathbb{R}^D \mapsto \mathbb{R}^d$ (de-mixing), the dynamics model $\hat f: \mathbb{R}^d \mapsto \mathbb{R}^d$, the similarity function $\phi: \mathbb{R}^d \times \mathbb{R}^d \mapsto \mathbb{R}$, and a correction term $\alpha: \mathbb{R}^d \mapsto \mathbb{R}$. We define their composition as¹
|
ICLR.cc/2025/Conference
|
XwibrZ9MHG
| 11
|
1 INTRODUCTION
|
✓
|
ICLR.cc/2025/Conference
|
XwibrZ9MHG
| 12
|
1 INTRODUCTION
|
Meshes
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 61
|
7 ETHICS STATEMENT.
|
Florian Ederer and Weicheng Min. Bayesian persuasion with lie detection. Technical Report 30065, National Bureau of Economic Research, May 2022. URL http://www.nber.org/papers/w30065.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 62
|
7 ETHICS STATEMENT.
|
Don Fallis. Lying and deception. Philosophers’ Imprint, 10, 2010.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 63
|
7 ETHICS STATEMENT.
|
Eileen Fitzpatrick and Joan Bachenko. Building a data collection for deception research.
|
ICLR.cc/2025/Conference
|
YaRzuMaubS
| 64
|
7 ETHICS STATEMENT.
|
InProceedings of the Workshop on Computational Approaches to Deception Detection, pp. 31–38, Avignon, France, April 2012. Association for Computational Linguistics. URL https://aclanthology.org/W12-0405.
Tommaso Fornaciari, Federico Bianchi, Massimo Poesio, and Dirk Hovy. BERTective: Language models and contextual information for deception detection. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pp. 2699–2708, Online, April 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.eacl-main.232. URL https://aclanthology.org/2021.eacl-main.232.
Tim Franzmeyer, Stephen Marcus McAleer, Joao F. Henriques, Jakob Nicolaus Foerster, Philip Torr, Adel Bibi, and Christian Schroeder de Witt. Illusory attacks: Detectability matters in adversarial attacks on sequential decision-makers. In The Second Workshop on New Frontiers in Adversarial Machine Learning, 2023. URL https://openreview.net/forum?id=8kQBjQ6Dol.
Piotr J. Gmytrasiewicz. How to do things with words: A bayesian approach. J. Artif. Intell. Res., 68:753–776, 2020. URL https://api.semanticscholar.org/CorpusID:221324549.
Josh A. Goldstein, Girish Sastry, Micah Musser, Renee DiResta, Matthew Gentzel, and Katerina Sedova. Generative language models and automated influence operations: Emerging threats and potential mitigations, 2023. URL https://arxiv.org/abs/2301.04246.
Google. Bard, 2023. URL https://bard.google.com/.
Joshua Greene. Why are VMPFC patients more utilitarian? A dual-process theory of moral judgment explains. Trends in Cognitive Sciences, 11:322–323; author reply 323, 2007. doi: 10.1016/j.tics.2007.06.004.
He He, Derek Chen, Anusha Balakrishnan, and Percy Liang. Decoupling strategy and generation in negotiation dialogues, 2018a.
He He, Derek Chen, Anusha Balakrishnan, and Percy Liang. Decoupling strategy and generation in negotiation dialogues, 2018b. URL https://arxiv.org/abs/1808.09637.
Xingwei He, Zhenghao Lin, Yeyun Gong, A-Long Jin, Hang Zhang, Chen Lin, Jian Jiao, Siu Ming Yiu, Nan Duan, and Weizhu Chen. AnnoLLM: Making large language models to be better crowdsourced annotators, 2023.
Naoki Hosomi, Sakriani Sakti, Koichiro Yoshino, and Satoshi Nakamura. Deception detection and analysis in spoken dialogues based on fastText. In 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pp. 139–142, 2018. doi: 10.23919/APSIPA.2018.8659614.
Evan Hubinger, Carson Denison, Jesse Mu, Mike Lambert, Meg Tong, Monte MacDiarmid, Tamera Lanham, Daniel M. Ziegler, Tim Maxwell, Newton Cheng, Adam Jermyn, Amanda Askell, Ansh Radhakrishnan, Cem Anil, David Duvenaud, Deep Ganguli, Fazl Barez, Jack Clark, Kamal Ndousse, Kshitij Sachan, Michael Sellitto, Mrinank Sharma, Nova DasSarma, Roger Grosse, Shauna Kravec, Yuntao Bai, Zachary Witten, Marina Favaro, Jan Brauner, Holden Karnofsky, Paul Christiano, Samuel R. Bowman, Logan Graham, Jared Kaplan, Sören Mindermann, Ryan Greenblatt, Buck Shlegeris, Nicholas Schiefer, and Ethan Perez. Sleeper agents: Training deceptive LLMs that persist through safety training, 2024.
Leslie Pack Kaelbling, Michael L Littman, and Anthony R Cassandra. Planning and acting in partially observable stochastic domains. Artificial intelligence, 101(1-2):99–134, 1998.
Pamela J. Kalbfleisch and Tony Docan-Morgan. Defining Truthfulness, Deception, and Related Concepts, pp. 29–39. Springer International Publishing, Cham, 2019. ISBN 978-3-319-96334-1. doi: 10.1007/978-3-319-96334-1_2. URL https://doi.org/10.1007/978-3-319-96334-1_2.
Dongyeop Kang, Anusha Balakrishnan, Pararth Shah, Paul Crook, Y-Lan Boureau, and Jason Weston. Recommendation as a communication game: Self-supervised bot-play for goal-oriented dialogue, 2019. URL https://arxiv.org/abs/1909.03922.
Zachary Kenton, Tom Everitt, Laura Weidinger, Iason Gabriel, Vladimir Mikulik, and Geoffrey Irving. Alignment of language agents, 2021.
Hyunwoo Kim, Youngjae Yu, Liwei Jiang, Ximing Lu, Daniel Khashabi, Gunhee Kim, Yejin Choi, and Maarten Sap. ProsocialDialog: A prosocial backbone for conversational agents, 2022. URL https://arxiv.org/abs/2205.12688.
Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, and Dhruv Batra. Deal or no deal? End-to-end learning for negotiation dialogues, 2017a. URL https://arxiv.org/abs/1706.05125.
Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, and Dhruv Batra. Deal or no deal? end-to-end learning for negotiation dialogues, 2017b.
Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human falsehoods, 2021. URL https://arxiv.org/abs/2109.07958.
Jingjing Liu, Stephanie Seneff, and Victor Zue. Dialogue-oriented review summary generation for spoken dialogue recommendation systems. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 64–72, Los Angeles, California, June 2010. Association for Computational Linguistics. URL https://aclanthology.org/N10-1008.
James Edwin Mahon. The definition of lying and deception. In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, Winter 2016 edition, 2016.
Clancy Martin. The Philosophy of Deception. Oxford University Press, 2009. ISBN 9780195327939. doi: 10.1093/acprof:oso/9780195327939.001.0001. URL https://doi.org/10.1093/acprof:oso/9780195327939.001.0001.
Jaume Masip, Eugenio Garrido, and Carmen Herrero. Defining deception. Anales de Psicología, 2004. ISSN 0212-9728. URL https://www.redalyc.org/articulo.oa?id=16720112.
Peta Masters and Sebastian Sardina. Deceptive path-planning. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI'17, pp. 4368–4375. AAAI Press, 2017. ISBN 9780999241103.
Peta Masters, Wally Smith, Liz Sonenberg, and Michael Kirley. Characterising deception in AI: A survey. In Deceptive AI: First International Workshop, DeceptECAI 2020, Santiago de Compostela, Spain, August 30, 2020 and Second International Workshop, DeceptAI 2021, Montreal, Canada, August 19, 2021, Proceedings 1, pp. 3–16. Springer, 2021.
Gregory McWhirter. Behavioural deception and formal models of communication. The British Journal for the Philosophy of Science, 67(3):757–780, 2016. doi: 10.1093/bjps/axv001. URL https://doi.org/10.1093/bjps/axv001.
Gerald R. Miller and James B. Stiff. Deceptive Communication. Sage Series in Interpersonal Communication, v. 14. Sage Publications, Newbury Park, Calif., 1993. ISBN 080393484X.
Robert Noggle. The ethics of manipulation. In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, Summer 2022 edition, 2022.
OpenAI. GPT-4, 2023. URL https://openai.com/research/gpt-4.
Alexander Pan, Chan Jun Shern, Andy Zou, Nathaniel Li, Steven Basart, Thomas Woodside, Jonathan Ng, Hanlin Zhang, Scott Emmons, and Dan Hendrycks. Do the rewards justify the means? Measuring trade-offs between rewards and ethical behavior in the MACHIAVELLI benchmark, 2023.
Joon Sung Park, Joseph C. O’Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior, 2023a.
Peter S. Park, Simon Goldstein, Aidan O'Gara, Michael Chen, and Dan Hendrycks. AI deception: A survey of examples, risks, and potential solutions, 2023b.
Sheldon Richmond. Superintelligence: Paths, dangers, strategies. Philosophy, 91(1):125–130, 2016.
H. Roff. AI deception: When your artificial intelligence learns to lie. IEEE Spectrum, 2020. URL https://spectrum.ieee.org/automaton/artificial-intelligence/embedded-ai/ai-deception-when-your-ai-learns-to-lie.
Chiaki Sakama, Martin Caminada, and Andreas Herzig. A formal account of dishonesty. LogicJournal of the IGPL, 23(2):259–294, 12 2014. ISSN 1367-0751. doi: 10.1093/jigpal/jzu043. URLhttps://doi.org/10.1093/jigpal/jzu043.
ICLR.cc/2025/Conference | ONfWFluZBI | 45 | 6 RESULTS | Symmetric encoders cannot identify non-trivial dynamics. In the more general case where the dynamics dominates the system behavior, the baseline cannot identify linear dynamics (or more complicated systems). In the general LDS and SLDS cases, the baseline fails to identify the ground-truth dynamics (Table 1), as predicted by Corollary 1 (rows marked with ✗). For identity dynamics, the baseline is able to identify the latents (R2 = 99.56%) but breaks as soon as linear dynamics are introduced (R2 = 73.56%).
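For context on the terminology in the row above: identity dynamics (read here as A = I) leave the latent state essentially unchanged from step to step, a general linear dynamical system (LDS) evolves it through a mixing matrix A, and a switching LDS (SLDS) alternates among several such matrices. The Python sketch below only generates toy data from these three regimes under assumed dimensions and noise levels; it does not reproduce that paper's encoder training or its R2 evaluation.

```python
# Illustrative sketch of the latent-dynamics regimes contrasted above:
# identity dynamics, a single linear dynamical system (LDS), and a switching
# LDS (SLDS). Dimensions, noise scales, and the random matrices are assumptions;
# this does not reproduce that paper's encoder training or R2 evaluation.
import numpy as np

rng = np.random.default_rng(0)
d, T = 3, 500  # latent dimension and sequence length

def random_stable_A():
    # Random matrix rescaled to spectral radius 0.9 so rollouts stay bounded.
    A = rng.normal(size=(d, d))
    return 0.9 * A / np.max(np.abs(np.linalg.eigvals(A)))

def rollout(A_list, switch_every=100, noise=0.05):
    # z_{t+1} = A_k z_t + eps, where A_k switches every `switch_every` steps.
    z = np.zeros((T, d))
    z[0] = rng.normal(size=d)
    for t in range(1, T):
        A = A_list[(t // switch_every) % len(A_list)]
        z[t] = A @ z[t - 1] + noise * rng.normal(size=d)
    return z

z_identity = rollout([np.eye(d)])                        # identity dynamics
z_lds = rollout([random_stable_A()])                     # single LDS
z_slds = rollout([random_stable_A() for _ in range(3)])  # switching LDS

# Observations are a fixed mixing of the latents (linear here for simplicity).
mixing = rng.normal(size=(d, 10))
x_slds = z_slds @ mixing
print(z_identity.shape, z_lds.shape, x_slds.shape)  # (500, 3) (500, 3) (500, 10)
```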
ICLR.cc/2025/Conference | YaRzuMaubS | 101 | 7 ETHICS STATEMENT | Eugene Santos and Deqing Li. On deception detection in multiagent systems. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 40(2):224–235, 2009.
ICLR.cc/2025/Conference | YaRzuMaubS | 102 | 7 ETHICS STATEMENT | Stefan Sarkadi. Deception. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, IJCAI'18, pp. 5781–5782. AAAI Press, 2018. ISBN 9780999241127.