Information
  • Article
  • Open Access

17 October 2025

Designing Co-Creative Systems: Five Paradoxes in Human–AI Collaboration

1 Computer Science Department, Universidad Rey Juan Carlos, 28001 Madrid, Spain
2 Applied Mathematics Department, Universidad Rey Juan Carlos, 28001 Madrid, Spain
* Author to whom correspondence should be addressed.

Abstract

The rapid integration of generative artificial intelligence (AI) into creative workflows is transforming design from a human-driven activity into a synergistic process between humans and AI systems. Yet, most current tools still operate as linear “executors” of user commands, which fundamentally clashes with the non-linear, iterative, and ambiguous nature of human creativity. Addressing this gap, this article introduces a conceptual framework of five irreducible paradoxes—ambiguity vs. precision, control vs. serendipity, speed vs. reflection, individual vs. collective, and originality vs. remix—as core design tensions that shape human–AI co-creative systems. Rather than treating these tensions as problems to solve, we argue they should be understood as design drivers that can guide the creation of next-generation co-creative environments. Through a critical synthesis of existing literature, we show how current executor-based AI tools (e.g., Microsoft 365 Copilot, Midjourney) fail to support non-linear exploration, refinement, and human creative agency. This study contributes a novel theoretical lens for critically analyzing existing systems and a generative framework for designing human–AI collaboration environments that augment, rather than replace, human creative agency.

1. Introduction

The integration of generative artificial intelligence (AI) into creative workflows is transforming design from a predominantly human-driven activity into a synergistic process between humans and AI systems, a paradigm known as human–AI co-creation [1]. We define human–AI co-creation as a collaborative partnership where AI and humans operate as a cohesive system, engaging in a dynamic interchange to produce outcomes that exceed the creative potential of any single agent [2,3]. This differs fundamentally from a tool-based model, where AI executes deterministic commands. Instead, in a co-creative model, AI contributes generatively and interpretively throughout the creative process, acting as an active collaborator rather than a passive instrument [4,5]. This paradigm shift necessitates a deeper understanding of the informational dynamics, challenges, and prerequisites required for such partnerships to truly augment human creativity [6].
As AI becomes increasingly integrated into everyday human activities, we need to reevaluate how we perceive and use this quickly evolving technology. Non-academic discourse posits that AI has the potential to automate repetitive work, freeing up human attention for more creative endeavors. However, the rise of creative AI is blurring traditional automation boundaries. The authors of [2] outline three different uses for creative AI: (1) comprehension (since creative processes require some level of comprehension), (2) representation (using AI to fill in the gaps in existing datasets), and (3) generation (including text-to-image synthesis and visual transformation to create original outputs). In essence, by integrating (generative) AI tools, systems, and agents, human–AI co-creativity has the potential to extend human creative capability well beyond what is typical for unaided human creativity. This shift requires a more thorough understanding of these co-creative relationships, their attendant difficulties, and the prerequisites for augmentation [3]. A collaborative model, in which AI contributes generatively and interpretively to the creative process, has replaced a tool-based model in which AI merely executes deterministic commands, much as a search engine retrieves results.
From the standpoint of information science, this change necessitates a reassessment of how information is exchanged, understood, and transformed in a human–AI partnership. Whereas the AI needs precise, low-level information inputs (prompts, parameters), the human brings ambiguous, high-level information needs (vision, intent, style) [4]. Many collaboration failures stem from this basic mismatch in information behavior. This study contends that effective co-creative systems must be built as information-rich environments that bridge this gap, enabling the intricate information connections that define creative labor: exploration, reflection, and serendipitous discovery.
However, most current AI tools still operate as linear ‘executors’ of user commands, which fundamentally clashes with the non-linear, iterative, and ambiguous nature of human creativity [7,8]. We contend that this mismatch points to deeper, inherent tensions in the collaboration. This article argues that the design space for human–AI co-creation is fundamentally shaped by a set of irreducible paradoxes. These paradoxes—which we identify as ambiguity vs. precision, control vs. serendipity, speed vs. reflection, individual vs. collective, and originality vs. remix—are not problems to be solved but essential dynamics to be managed. While the limitations of current ‘executor-model’ AI tools are well-documented—such as their linear workflow, lack of support for refinement, and struggle with ambiguity—the prevailing research approach has been to address these as discrete technical or interaction-level problems to be solved. For instance, studies focus on improving prompt engineering [4], mitigating specific user experience pitfalls [9], or modeling interaction patterns [10].
However, this problem-solving approach overlooks a more fundamental issue: these limitations are not isolated flaws but surface manifestations of deeper, irreducible tensions inherent in the collaboration between human and artificial cognition. The current paradigm lacks a conceptual framework that explains why these tensions exist and how they can constructively shape design, rather than being eliminated.
Therefore, the critical research gap this article addresses is the lack of a foundational theoretical lens for understanding the core, paradoxical tensions that define the design space of human–AI co-creativity. Without this lens, system design remains reactive, focusing on patching specific interaction failures without guiding the creation of truly synergistic co-creative environments.
This article develops a conceptual framework for the design of co-creative systems between humans and artificial intelligence. We contend that the tensions identified above should be viewed as fundamental design forces rather than as issues to be resolved. This framework offers both a theoretical perspective for examining current systems and a generative roadmap for developing next-generation co-creative environments that enhance, rather than replace, human creative agency.

2. Shift from AI as a Tool to AI as a Collaborator

Collaboration between humans and creative systems is crucial because it boosts creativity, introduces new viewpoints, promotes continuous learning, and helps solve complex problems. For a system to be deemed autonomously creative, it must be capable of creative behavior, such as coming up with original concepts or solutions on its own, without human assistance [6]. This raises the question of whether generative AI tools are inherently creative. The foundation of this type of creativity is machine learning, which gives algorithms the ability to learn, adapt, and react in ways that can be considered “intelligent”—and hence, potentially creative [7]. But the argument over whether technical systems are truly creative goes beyond science and turns into a philosophical discussion about appearing vs. being. This discussion centers on the possible drawbacks of generative AI. Some perspectives hold that AI’s dependence on pre-existing data limits it to exhibiting “incremental creativity,” raising doubts about the breadth and genuineness of its creative output [11,12]. Non-academic discourse posits that genuine creativity is an exclusive domain of humankind, contingent upon our singular ability to experience profound emotions and exercise empathy [10].
Throughout the creative process, humans employ a wide variety of creative techniques, thought processes, and concepts, and the final product evolves dynamically over time. The agent must be flexible in order to keep up with this constant flow of ideas. Furthermore, the role and interactions of the co-creative AI are not always explicit throughout the co-creation process. For instance, the human may want to take the lead and let the AI assist with certain tasks. At other times, the human may wish for the AI to play a more proactive role, generating unexpected ideas or working semi-autonomously within a defined context to explore a solution space [13]. Many generative design methods currently in use account for human factors inadequately, restricting their capacity to address the full range of human skills, limitations, and affective reactions that must be considered to advance truly human-centered product and service innovation [14]. Recent studies have also identified critical issues such as role ambiguity, cognitive overload, authorship, uncertainty about outcomes, and control conflicts between users and AI agents [8,15,16].
Generative AI is ushering in a new era known as “human–AI co-creation,” in which AI ceases to be a passive instrument and instead becomes an active, cooperative partner. The goal of this collaboration is to generate collective creativity superior to what either AI or humans could achieve alone. We believe that although AI can improve human creativity and automate activities, its development blurs conventional lines and calls for a better comprehension of co-creative dynamics. Because of the significant shortcomings caused by this mismatch, and building on prior studies [5,13,17,18], we propose five irreducible paradoxes—ambiguity vs. precision, control vs. serendipity, speed vs. reflection, individual vs. collective, and originality vs. remix—that fundamentally shape the design space for human–AI co-creative systems. We suggest that these paradoxes are not issues to be resolved but necessary tensions to be managed, and that they offer a critical lens through which to examine current systems and produce fresh design frameworks.

3. Core Paradoxes in Human–AI Co-Creative Design

This section elaborates on the five irreducible paradoxes that form our conceptual framework for analyzing and designing human–AI co-creative systems. These tensions are not problems to be solved but essential dynamics to be managed. Table 1 provides a concise overview.
Table 1. Summary of five paradoxes in human–AI co-creative systems.

3.1. Ambiguity vs. Precision

Advanced generative AI systems like GPT-4 are being adopted at a rapid pace, revolutionizing human–technology interaction by enabling conversational, intuitive problem-solving in natural language across a variety of applications [45]. AI models are trained on vast amounts of data and designed to provide precise outputs; human input, on the other hand, may contain ambiguous or emotive language. This leads to the following question: how can we create interfaces that serve as “ambiguity translators,” assisting users in gradually refining vague intentions into prompts, without limiting initial creative exploration?
There is a fundamental mismatch here: human creative thought is inherently ambiguous, expressed through abstract concepts and subjective language, while AI systems require precision and explicit parameters to function predictably [27,30]. Users consequently struggle to translate their vision into executable commands, leading to frustration. This produces a fundamental conflict: users can exploit vague cues to obtain new and imaginative results from the AI, yet the underlying model needs some level of accuracy and predictable parameters to work well. The challenge, therefore, is to create systems that treat human uncertainty as a source of creative potential rather than as noise to be removed, while still giving the model the structured input it needs to produce coherent results.
An “ambiguity translator,” in this sense, is not an input box but an interaction loop. The answer is not a single output but multiple designs. For instance, [42] investigated the integration of text-to-image (T2I) generators such as Midjourney into the conceptual design stage of interior design education; the results showed that AI-assisted visualization improved conceptual precision, sped up design iteration, and enhanced ideation. This possibility introduces a fundamental question [5]: where do we draw the boundary between creativity and human–AI systems? Determining the source of creativity—the human, the AI, or the collaboration itself—is the main challenge. This unresolved question critically shapes how we evaluate creative outputs, forcing a choice between viewing AI as a mere tool or as a genuine creative partner. The same fundamental contradiction—the conflict between the nature of present AI systems and human creativity—is discussed in Section 2.1 from the perspective of linearity vs. non-linearity.
To navigate the ambiguity vs. precision paradox, system design must move beyond single-turn prompt boxes. A more effective approach is to implement multi-turn refinement loops that function as “ambiguity translators”: interfaces that ask clarifying questions, suggest refinements, and maintain conversational context, decomposing the user’s high-level vision into manageable chunks rather than locking the user into a single output cycle. Pilot systems in design education, such as those explored in [42], demonstrate how iterative dialogue with AI can enhance conceptual precision and ideation, collaboratively building precision from ambiguity over multiple turns.
A complementary approach involves designing fluid roles. By enabling the AI to shift across a spectrum of roles—from a supportive tool that executes precise commands to an active co-creator that proposes ideas—the system can ensure that the pursuit of precision does not prematurely restrict the creative process. Deliberately allowing for a degree of role ambiguity has been shown to be necessary for creative potential, as it permits the collaboration to evolve and new, emergent roles to form dynamically [13,24].
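The refinement loop described above can be sketched in miniature. The following is a hypothetical illustration only (the `VAGUE_TERMS` lexicon and the `clarify` function are invented for this sketch, not part of any existing system): vague terms in the user's brief trigger clarifying questions, and the user's answers accumulate into an increasingly precise structured prompt instead of forcing precision in a single turn.

```python
# Hypothetical "ambiguity translator" loop: a tiny lexicon of vague terms,
# each mapped to a clarifying question the interface would ask.
VAGUE_TERMS = {
    "modern": "Which era or style? (e.g., minimalist, mid-century, brutalist)",
    "warm": "Warm in palette (reds/oranges) or in mood (soft lighting)?",
    "dynamic": "Dynamic via composition (diagonals) or via implied motion?",
}

def clarify(brief, answers):
    """Return a structured prompt plus any still-open clarifying questions."""
    constraints, open_questions = [], []
    for term, question in VAGUE_TERMS.items():
        if term in brief.lower():
            if term in answers:
                constraints.append(f"{term} -> {answers[term]}")
            else:
                open_questions.append(question)
    if constraints:
        return f"{brief} | constraints: {'; '.join(constraints)}", open_questions
    return brief, open_questions

# Turn 1: the vague brief yields clarifying questions, not a forced guess.
prompt, questions = clarify("a modern, warm living room", {})
# Turn 2: the user's answers are folded back in as explicit constraints.
prompt, questions = clarify(
    "a modern, warm living room",
    {"modern": "mid-century", "warm": "soft lighting"},
)
```

The design choice is that ambiguity is preserved until the user resolves it; the system never silently substitutes its own guess for an unanswered question.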

3.2. Control vs. Serendipity

The creative process is contingent: from the first germ of a concept to its eventual adoption by the target audience, there are numerous routes to creative success or failure. Serendipity is the experience of making an unexpected and beneficial discovery through a combination of chance and a “prepared mind” [46,47]. We argue that this is paradoxical because it necessitates the simultaneous presence of two seemingly incompatible elements: (i) a lack of control and (ii) agency and control. Discovery must commence with an unforeseen, unplanned incident that lies beyond the individual’s direct control or intention; it is a disruption of the expected course of events [46,47]. Yet to exploit it, the person must be knowledgeable, sensitive, and cognitively prepared (sagacious) enough to see the accident’s potential, and able to use it expertly and purposefully to produce something original and worthwhile [48].
Although users can instruct AI to produce “surprising” results, this often amounts to trial and error with ambiguous instructions, producing random outputs that may bear little relation to the user’s original purpose. The control vs. serendipity paradox addresses the design challenge of moving beyond this. We suggest methods designed to provide contextually appropriate surprises, such as lateral variants or stylistic opposites conditioned on the state of the project, while explicitly giving the user curation tools and “veto power.” The user’s sagacity (or prepared mind) can then identify and incorporate unexpected but beneficial recommendations, turning serendipity from a passive, random event into an active, collaborative process.
Here, creativity arises from the tension between passively accepting chance and actively, skillfully manipulating it; it is neither completely accidental nor purely agential. Pure agency lacks the disruptive spark of the unexpected, whereas pure chance is passive and “blind,” according to Ross; serendipity occurs at the exact point where these two opposing forces converge [49]. Serendipity is valuable only if the user trusts the AI’s output. The locus of control can be determined by asking what the optimal balance between AI-generated suggestions and user veto power is. We argue that this optimal balance is not merely a technical specification but a core question of interaction design: the system does not aim for a flawless, predetermined optimum.
Sagacity levels vary from person to person, so the balance depends on the cognitive state of the user; it is a dynamic interaction intended to promote the right circumstances rather than a static arrangement. Serendipity can be fostered in several ways. One is to actively counter the over-optimization of recommendation algorithms [50]. Another is to control repetition, preventing the system from providing the same core answer with only superficial changes in vocabulary. A third is to support, not supplant, exploration [51]: AI should be a catalyst that broadens the user’s horizon, acting as the digital counterpart of “visiting the library, going to the stacks, or going to the seemingly unrelated seminar.” While users inherently possess veto power in any interactive system, the ideal balance for co-creativity deliberately designs for and emphasizes this veto authority as the critical manifestation of the “prepared mind” required for serendipitous discovery [51].
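Two of these mechanisms, repetition control and explicit veto power, can be sketched concretely. The following is a hypothetical illustration (the `propose` and `similar` functions are invented for this sketch): candidate suggestions are dropped if the user has vetoed them, or if they are only superficial rewordings of material the user has already accepted, measured here by simple word overlap.

```python
def similar(a, b):
    """Jaccard word overlap between two strings, used to flag rewordings."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def propose(candidates, accepted, vetoed, max_overlap=0.5):
    """Filter AI candidates: drop vetoed items and near-repeats of material
    the user has already accepted, leaving genuinely fresh suggestions."""
    fresh = []
    for c in candidates:
        if c in vetoed:
            continue  # explicit veto power: curation, not passive consumption
        if any(similar(c, kept) > max_overlap for kept in accepted):
            continue  # repetition control: same core answer, new vocabulary
        fresh.append(c)
    return fresh

# A near-repeat of an accepted idea and a vetoed item are both filtered out;
# only the genuinely new suggestion survives for the user to curate.
fresh = propose(
    candidates=["a hidden nemesis returns at dawn",
                "the city floods during the festival",
                "forbidden idea"],
    accepted=["a hidden nemesis returns at night"],
    vetoed={"forbidden idea"},
)
```

A real system would use semantic rather than lexical similarity, but the structural point is the same: breadth from the algorithm, discernment from the user.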
This vetoing and curating process involves active, critical communication with the AI rather than simple rejection. Consider a writer collaborating with an AI. The AI may suggest an initially implausible plot twist. The writer’s veto is more than a rejection: “This twist doesn’t fit my character’s motivation, but it gives me the idea to introduce a hidden nemesis.” Here, the “poor” suggestion served as a springboard for a fresh, creative concept, illustrating how the interplay between AI suggestion and human curation co-creates serendipity. The human partner evaluates the output according to their unique creative intent, knowledge of the medium, and topical expertise. When an AI fails to live up to a user’s high expectations, it frequently serves as a catalyst for goal definition and improved iteration. Consequently, efficient co-creation requires dual literacy: in-depth knowledge of the subject matter and artistic medium, and an intuitive grasp of the AI’s potential and constraints.
Furthermore, vetoing serves as an active reasoning process that strengthens the user’s role in abductive thinking, which is crucial for serendipity. It facilitates the synchronization of prior information with apparent anomalies [52]. This cognitive curation is crucial for spotting valuable deviations; by discarding irrelevant suggestions, the user frees attentional space to recognize and use unexpected insights that may otherwise be overlooked in a stream of homogenized recommendations. The conflict between AI-proposed options and human veto power creates a dynamic in which algorithmic breadth is balanced by expert discernment, fostering settings conducive to unique and meaningful discovery. This tension is further complicated by the different goals of each agent: AI models are often optimized to generate a statistically ‘probable’ or optimal output based on their training data, whereas human creators frequently seek ‘good-enough’ results that satisfy a unique, situated intent [18,39].

3.3. Speed vs. Reflection

AI is not affected by physiological factors like fatigue [53], yet it operates under the expectation of generating inventive content continuously and at scale. AI’s speed sets a hard bar for human creators, but it also reduces the training requirements for artists and creatives. AI’s ability to learn from human creations sets a higher standard for innovation. The objective is not to slow down AI for its own sake, but rather to intentionally foster human judgment, contextualization, and intentionality. The importance of speed is based on theories of divergent thinking and brainstorming, in which postponing judgment and creating a large number of ideas are important stages in the creative process. In human–AI co-creative design, reflection [54] denotes the metacognitive process of critical thought, evaluation, and integration. The tension arises when AI systems accelerate processes, for instance, text generation, but this acceleration unintentionally limits time available for human reflective skills such as critical thinking, contextual interpretation, and creative intent. This creates a contradiction between efficiency and depth of comprehension.
It is important to clarify that speed and reflection are not literal opposites but represent a symbolic design tension between two cognitive modes. The notion of “speed” in this paradox refers to the system’s tendency toward automation, acceleration, and output optimization, while “reflection” denotes the human capacity for metacognition, contextual reasoning, and deliberate evaluation. This paradox can also be interpreted through the lens of closure-driven vs. process-driven creativity (as in Myers–Briggs typologies), where the contrast lies not in temporal velocity but in cognitive orientation. We therefore retain the title “Speed vs. Reflection” to preserve consistency across the five paradoxes, while clarifying that it represents a broader conceptual duality between efficiency and depth of thought in human–AI collaboration. A use case in radiology, where AI is used to pre-screen medical images such as MRIs or X-rays, provides the clearest illustration of this dilemma. By identifying possible abnormalities, the AI expedites the initial diagnostic procedure, a gain in productivity. It might, however, inadvertently cut down the time radiologists devote to a comprehensive, reflective examination of every scan. Because the opportunity for in-depth, personalized contemplation diminishes as the number of images processed per unit of time grows, the creative intent in diagnosis—which entails building a patient-specific story from subtle, contextual clues—cannot develop adequately in high-speed settings [55]. We argue that a primary focus on using AI for speed within a system’s design can inadvertently marginalize the reflective techniques that make human specialists competent and capable of dealing with novel situations in the long term. The “speed” path is alluring: it provides instant advantages in productivity and efficiency.
However, such a design risks fostering over-reliance, where a generation of professionals may experience attentional deskilling—a degradation of critical thinking abilities due to lack of practice—as the system’s speed and efficiency reduce the necessity for deep, independent analysis [56]. Another study [57] proposed that this tension can be addressed not by choosing between speed and reflection, but by designing AI systems to support and provoke reflection. This approach aims to make human professionals more profoundly effective, rather than merely mechanically faster, by augmenting their critical reasoning. To reconcile speed and reflection, future co-creative systems must be purposefully built with strategic “pause points,” or friction, that encourage critical review and deliberate incorporation of AI-generated content. Interfaces that support annotation of outputs, comparative analysis of multiple possibilities, and suggested reflection prompts can help ensure that AI’s speed enriches rather than diminishes the depth of human creative cognition.
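One such pause point can be sketched as follows. This is a hypothetical illustration (the `PausePoint` class is invented for this sketch): AI output is withheld from the workflow until the user records a minimally substantive reflection note, turning acceptance into a deliberate act rather than a reflex.

```python
class PausePoint:
    """Gate AI output behind a short written reflection before acceptance."""

    def __init__(self, min_note_words=5):
        self.min_note_words = min_note_words
        self.log = []  # accepted (ai_output, reflection_note) pairs

    def review(self, ai_output, note):
        """Accept the output only if the reflection note is substantive."""
        if len(note.split()) < self.min_note_words:
            return False  # friction: a bare "ok" does not unlock the output
        self.log.append((ai_output, note))
        return True

pp = PausePoint()
pp.review("AI-drafted summary", "ok")  # rejected: too little reflection
pp.review("AI-drafted summary",
          "the framing misses the patient context entirely")  # accepted
```

The word-count threshold is a stand-in for whatever real measure of reflective engagement a system adopts; the point is the interaction pattern, not the metric.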

3.4. Individual vs. Collective

This paradox is profound and nuanced, and it gets to the core of the growing interaction between humans and AI. A fundamental tension occurs when a single human creator (the individual) collaborates with an artificial intelligence that is, by definition, an embodiment of a collective—trained on aggregated data, patterns, and outputs from large swathes of human culture and knowledge. The two sides of this paradox are the human (individual), whose goal is a unique, coherent creative vision, and the AI model (collective), which embodies generic human knowledge, the aggregated wisdom of its data. The paradox manifests when the output of the collective (the AI) misaligns with the vision of the individual (the human creator). For instance, AI-powered design tools like Copilot and Midjourney often follow a linear sequence of exact instructions to approximate design objectives. These procedures contravene creative design guidelines, limiting AI agents’ ability to accomplish creative tasks [18]. This tension raises crucial research questions: how do designers and AIs settle creative disputes, and which interface features work best for negotiating a common course? These inquiries are the paradox’s practical expressions, seeking ways to address the discrepancy between individual intentions and collective production.
A study [58] demonstrates how AI influences human narrative, pointing to a type of implicit negotiation in which people integrate AI-generated concepts into their own original work; assimilation and compromise are ways of “settling” creative disputes. The study also shows that hybrid human–AI networks attained the greatest diversity over time, indicating that creative synergy can result from straightforward, anonymous, iterative collaboration without formal negotiation interfaces. This suggests that minimally controlled interactions can help humans and AIs negotiate their creative differences. According to a different study [59], creative “disputes” are settled through complementary cooperation rather than overt bargaining: while AI delivers scale, pattern recognition, and quick generation, humans contribute context, intentionality, and moral judgment. This synergy lets both contribute their strengths without one dominating the other. The study likewise supports open-ended, adaptable technologies that let humans direct the creative process while exploiting AI’s capacity to produce concepts and variations. The ideal “interface” is one that lets AI manage extensive pattern synthesis while still permitting human oversight and contextual input.
We might draw the conclusion that a conflict is inevitable when a single creator interacts with this collective reflection. Because it is based on statistical probability, the AI will tend toward the most probable choice, which often aligns with the traditional or common patterns in its training data. If the dataset leans toward unconventional or ‘unsafe’ practices, the output would likely reflect that instead. While intuitively, a system capable of learning a user’s distinctive style appears to be a logical resolution to this paradox, such an approach carries a significant risk. An excessive dependence on personalization can inadvertently suppress creativity, trapping the user in an algorithmic echo chamber that merely amplifies their established habits. Consequently, the objective shifts from engineering a flawless style replica to creating interfaces that empower users to consciously modulate their interaction with the collective knowledge base. Practical implementations could include adjustable parameters governing the weight of personal history against diverse stylistic datasets, or the intentional injection of incongruous or contrasting aesthetics from the collective to stimulate novel thinking. This ensures the AI functions as a conduit to a wider creative universe, rather than a simple reflection of the user’s own predispositions.
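The adjustable-parameter idea above can be sketched in code. This is a hypothetical illustration (the `sample_suggestions` function and its parameters are invented for this sketch): a weight governs how often suggestions are drawn from the user's personal-style pool versus the wider collective pool, and every few picks one suggestion is deliberately forced from the collective pool to counter the echo-chamber effect.

```python
import random

def sample_suggestions(personal, collective, personal_weight,
                       k=4, contrast_every=2, seed=0):
    """Draw k suggestions, mixing the user's own style pool with the wider
    collective pool; every `contrast_every`-th pick is forced from the
    collective pool as a deliberate injection of contrasting material."""
    rng = random.Random(seed)
    picks = []
    for i in range(1, k + 1):
        if i % contrast_every == 0 or rng.random() > personal_weight:
            picks.append(rng.choice(collective))  # contrast injection
        else:
            picks.append(rng.choice(personal))
    return picks

# Even at maximum personalization, contrasting material still appears:
mix = sample_suggestions(["my usual motif"], ["borrowed aesthetic"],
                         personal_weight=1.0)
```

The forced-contrast rule is the key design choice: no setting of `personal_weight` can seal the user entirely inside their own stylistic history.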
This paradox can be reframed as a fundamental shift in the human creator’s role: from a hands-on craftsperson to a creative director. The craftsperson is deeply involved in the manual execution and technical details of creation. In contrast, the creative director is skilled in formulating a high-level vision, briefing collaborators (both human and AI), and curating generated outputs to align with that vision. The most valuable skill becomes creative direction—the ability to articulate intent, guide the AI’s generative process through effective prompting and parameter setting, and make discerning choices from a vast array of possibilities. The individual’s genius no longer lies solely in manual dexterity or solitary ideation, but in the capacity to remix, refine, and focus the chaos of collective intelligence into a novel, coherent whole that bears their unique signature.

3.5. Originality vs. Remix

AI systems may generate unique material using data and algorithms, a capacity framed as originality. This notion invites evaluation of the parallels and differences between AI and human creativity, taking into account both technical and ethical considerations; AI-generated material is considered original on the basis of its unique methodologies, datasets, and results [60]. On the other hand, the theory of remix, defined by [61] as “copy, transform, and combine,” is a recursive algorithm for making new works from existing resources. The paradox is that this very ability—to quickly remix and regenerate content—enables both extraordinary production and a troubling erosion of creative variation.
We argue that generative AI embodies the remix concept. It works by quantitatively evaluating a large collection of human inventions (the ultimate remix source material) and identifying patterns, styles, and relationships within it. Its output is a recombination of these previously learnt patterns; it is the result of an extreme algorithmic remix. On the other hand, the output of AI appears to meet the criterion for originality. It can create a new image, text, or design that did not previously exist in that exact form. It can result in work that is unique (new) and frequently worthwhile. To a human observer, the outcome can appear unique, unexpected, and inventive. This creates irreconcilable tension from both remix and originality perspectives. It pulls us in two directions at once, exposing weaknesses and limitations in our long-held conception of uniqueness and creativity. According to findings from a study [62], AI can help to promote and deepen design originality, but it may be restricted in its ability to generate creativity itself. When paired with human ingenuity, AI improves design processes by providing both efficiency and originality. This suggests that value is not in the AI-generated remix, but in the human’s capacity to control, pick from, and infuse it with personal vision and context. The AI handles the computationally demanding “remix” (variation generation), whereas the human gives the “original” artistic direction.
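The “copy, transform, and combine” recursion of [61] can be made concrete with a toy sketch. This is an invented illustration, not a model of any real generative system: fragments are copied from a source pool, trivially transformed (here, by sometimes reversing word order), combined, and fed back in as new source material, so that output never contains anything not already latent in its sources.

```python
import random

def remix(sources, depth, rng):
    """Toy 'copy, transform, combine' recursion: copy two fragments,
    transform each, combine them, and feed the result back into the pool."""
    def transform(fragment):
        words = fragment.split()
        return " ".join(reversed(words)) if rng.random() < 0.5 else fragment

    a, b = rng.choice(sources), rng.choice(sources)          # copy
    combined = transform(a) + " / " + transform(b)           # transform + combine
    if depth > 0:
        return remix(sources + [combined], depth - 1, rng)  # recurse
    return combined

out = remix(["red sky", "cold sea"], depth=2, rng=random.Random(0))
```

Every token of the output traces back to the source pool, yet the exact combination need never have existed before: novelty and derivativeness in the same artifact, which is precisely the tension this section describes.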
The mechanics of generative AI fundamentally challenge the underlying concepts of originality and remix. According to Gunkel [61], the system works by processing statistical data rather than sampling content, converting cultural objects into mathematical embeddings that describe hidden patterns. This ontological shift implies that the binary of "original" (a privileged source) and "remix" (a derivative copy) is a fundamental misapplication of categories.
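The "extreme algorithmic remix" can be made concrete with a deliberately simplified sketch. The Python code below is our own toy illustration, not any real model: the `embed` function is a hypothetical stand-in for a learned embedding, reducing each work to a pattern vector, and `remix` recombines those vectors. The result matches no single training item (it appears "original") while being wholly derived from them (it is a remix).

```python
# Toy illustration (not a real generative model): training "works" become
# pattern vectors, and a generated output is a weighted recombination of
# those learned patterns.

def embed(work: str, dim: int = 8) -> list[float]:
    """Hypothetical stand-in for a learned embedding: hash characters
    into buckets and L2-normalize the resulting count vector."""
    vec = [0.0] * dim
    for ch in work:
        vec[ord(ch) % dim] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def remix(vecs: list[list[float]], weights: list[float]) -> list[float]:
    """The algorithmic 'remix': a convex combination of learned patterns."""
    total = sum(weights)
    dim = len(vecs[0])
    return [sum(w * v[i] for w, v in zip(weights, vecs)) / total
            for i in range(dim)]

corpus = ["dawn", "dusk"]          # two training "works" (toy stand-ins)
patterns = [embed(w) for w in corpus]
novel = remix(patterns, [0.7, 0.3])

# The blended vector coincides with neither training pattern: new in form,
# yet entirely derived from the corpus.
print(novel != patterns[0] and novel != patterns[1])  # True
```

The point of the sketch is structural rather than technical: nothing in `novel` originates outside the corpus, yet `novel` itself never appeared in it, which is exactly the category confusion the originality/remix binary runs into.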

4. Discussion

This study contributes a novel conceptual framework to the field of information science for critically analyzing human–AI co-creation. It argues for a paradigm shift away from the simplistic idea of AI as a tool and toward AI as an active, communicative partner. The framework demonstrates that incorporating generative AI into creative workflows is a fundamental shift rather than an incremental one. This shift demands a re-examination of traditional HCI paradigms, which frequently privilege efficiency and accuracy over the ambiguity, inquiry, and serendipity essential to human creativity. A turn-based, linear interaction model still predominates, as exemplified by systems such as Copilot [21] and Midjourney [26], reducing the person to a "command coder" rather than a "creative partner." This structure suppresses the non-linear, iterative, and emergent character of genuine creative cooperation [27,28,30].
Beyond merely identifying interaction patterns, the proposed paradoxes capture the fundamental tensions that characterize user information behavior and AI information supply within a co-creative dyad. The framework presents these tensions as necessary conflicts to be managed in the design of co-creative systems, rather than as problems to be solved, providing both a generative basis for creating new tools and a powerful analytical lens for evaluating current ones. The ambiguity vs. precision paradox, for example, highlights a fundamental mismatch: human creative cognition is abstract and subjective, yet AI requires specific criteria to operate. Designing interfaces that function as "ambiguity translators" through iterative, multi-turn dialogue is one viable remedy [42].
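One way to picture such an "ambiguity translator" is as a slot-filling dialogue loop. The sketch below is a hypothetical illustration of the interaction pattern only (all names, such as `REQUIRED_SLOTS` and `clarify`, are our own, and the keyword spotting stands in for an actual language model's interpretation step): the system detects which parameters a vague brief leaves unspecified and resolves each gap with one clarifying turn, yielding the precise specification a generator needs.

```python
# Hypothetical sketch of an "ambiguity translator": a multi-turn loop that
# converts a vague creative brief into precise generator parameters.
# Not an existing API; keyword spotting stands in for an LLM's parse.

REQUIRED_SLOTS = ["subject", "style", "mood"]

def extract_slots(prompt: str) -> dict:
    """Naive keyword spotting standing in for the model's interpretation."""
    known = {
        "poster": ("subject", "poster"),
        "minimalist": ("style", "minimalist"),
        "calm": ("mood", "calm"),
    }
    slots = {}
    for word in prompt.lower().split():
        if word in known:
            key, value = known[word]
            slots[key] = value
    return slots

def clarify(prompt: str, answers: dict) -> dict:
    """Fill each missing slot with one clarifying turn (answers stands
    in for the user's replies)."""
    slots = extract_slots(prompt)
    for slot in REQUIRED_SLOTS:
        if slot not in slots:            # ambiguity detected
            slots[slot] = answers[slot]  # one clarifying question per gap
    return slots

spec = clarify("a minimalist poster", answers={"mood": "energetic"})
print(spec)  # {'style': 'minimalist', 'subject': 'poster', 'mood': 'energetic'}
```

The design choice worth noting is that the loop never forces precision up front: the user may stay ambiguous, and the system asks only about the gaps that actually block generation, which is the negotiated middle ground the paradox calls for.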
Similarly, the control vs. serendipity paradox encapsulates the delicate balance required for fruitful collaboration. It reinterprets the user's "veto power" not as an AI failure but as an active, advantageous component of "abductive thinking" and cognitive curation [51,52], which are crucial for chance discovery. The originality vs. remix paradox is especially consequential because it calls into question basic ideas about authorship and creativity. This paper argues that generative AI is the ultimate remix engine, operating on statistical patterns across vast training data [61], yet its outputs can appear new and distinctive. Consequently, the human's capacity for creative direction, curating, refining, and adding context and personal perspective to the AI's output, becomes more valuable than the AI's raw generative ability [62]. This suggests that mastery of creative direction and curation, rather than mastery of a medium, may be the most important skill for aspiring creators. The description of the executor role [18] raises a further point: AI frequently produces results that are not explicable. Applying explainable AI (XAI) principles is one valuable tactic for managing several of the paradoxes at once; disclosing the "operational strategies" an AI employed to interpret a prompt, for instance, would narrow the ambiguity–precision gap and give users more agency.
Our approach is grounded in a critical synthesis of the existing literature, which consistently identifies shortcomings in currently available "executor"-model AI tools [17,20,25]. We read these not as isolated issues but as symptoms of deeper, irreducible tensions that arise when humans and AI create together. From this, we deduce that five fundamental paradoxes shape the design space for co-creative systems.

4.1. Interdisciplinary Connections

The proposed paradoxes resonate strongly with established concepts in other fields. The ambiguity vs. precision tension echoes the psychological study of divergent vs. convergent thinking, where creativity requires both open-ended ideation (ambiguity) and focused evaluation (precision). The individual vs. collective paradox is a microcosm of sociological debates about individual agency versus social structure, examining how a creator’s unique voice interacts with the vast, culturally embedded dataset of the AI. Furthermore, the speed vs. reflection tension aligns with critiques from the philosophy of technology, which warn of the potential for tools to shape human habits and values, in this case, potentially privileging speed over deep thought. Acknowledging these connections enriches our framework, positioning human–AI co-creativity not merely as a technical challenge, but as a profound socio-technical phenomenon.
The successful adoption of co-creative systems hinges on broader user and societal acceptance. This requires fostering new user literacies, such as prompt fluency and critical curation skills, to effectively direct AI collaborators. Furthermore, societal debates concerning authorship, authenticity, and the value of “human-made” work must be addressed. Systems that transparently frame AI as a tool for augmenting, rather than replacing, human creativity will be more readily accepted, shifting the narrative from “AI-made” to “human-directed.”

4.2. Theoretical Implications

By framing the difficulties of human–AI co-creation as a system of basic, constructive paradoxes, this study represents a substantial conceptual advance. It shifts the emphasis of research away from technical capability and toward the subtleties of interaction design, cognitive cooperation, and the nature of creativity itself. The framework extends theories in information science and HCI by offering a novel vocabulary and a critical lens for examining the informational dynamics and underlying conflicts in co-creative dyads. The persistent emergence of pitfalls such as those documented by [9] underscores that the challenges in human–AI co-creation are not transient bugs but symptoms of deeper, paradoxical tensions. This supports our position that a shift in perspective is needed: from solving these issues technically to managing them dynamically through thoughtful interaction design. Our paradox framework supplies the "why," while catalogues of pitfalls offer the "what"; together they form a more complete picture for guiding future research.
The establishment of this paradox-driven framework is the article's main contribution to information science and human–AI interaction. Whereas previous work lists individual difficulties [5,12,16,17,31,32], we synthesize them into a system of five basic, irreducible tensions. This reframes the design problem: the objective is to handle these tensions dynamically through careful interaction design, not to resolve them technically. The framework thereby gives scholars both a new vocabulary and the foundational "why" behind frequent collaboration failures.

4.3. Ethical Considerations in Co-Creative Systems

The ethical dimensions of human–AI co-creation are deeply embedded within the paradoxical tensions of our framework. The originality vs. remix paradox confronts foundational questions of intellectual property and attribution. When generative models produce outputs derived from extensive training datasets, it creates ambiguity regarding ownership. The rights of the human prompter, the AI developers, and the original creators whose works informed the algorithm all require consideration. This ambiguity challenges conventional copyright laws and underscores the need for novel legal and technical models to establish clear provenance and contribution [61,63].
Concurrently, the individual vs. collective paradox highlights the risks of algorithmic bias and cultural homogenization. Systems trained on aggregated data inherently encode and can amplify the biases within that corpus. Consequently, a designer’s unique vision may be systematically steered toward statistically dominant patterns, potentially stifling cultural diversity and reinforcing hegemonic aesthetic or narrative conventions. Mitigating this requires both equipping creators to identify these biases and ensuring systemic transparency regarding training data origins and model limitations.
Finally, the speed vs. reflection paradox carries implications for professional competency and agency. An over-dependence on AI for expedited ideation and execution may precipitate attentional deskilling [56], eroding the capacity for deep critical analysis and masterful craftsmanship. Therefore, an ethical approach to co-creative design must prioritize the augmentation of human cognition, safeguarding the professional’s ultimate authorship, judgment, and control over the creative process.

4.4. Future Work and Limitations

A principal limitation of this study is the conceptual origin of its framework: although informed by identified shortcomings in contemporary systems, it requires rigorous empirical testing to evaluate its real-world utility, its effect on design practice, and its relevance across collaborative domains. Subsequent investigations should prioritize implementing the model in controlled settings, for instance by building and evaluating prototype interfaces with integrated mechanisms for navigating each paradox, such as iterative dialogue for managing ambiguity, adjustable controls for serendipity, and built-in cues for reflection, and benchmarking them against conventional executor-based tools in practical creative scenarios. A critical appraisal of the framework's boundaries is also essential, recognizing that these paradoxes constitute persistent, dynamic trade-offs. This understanding compels deeper inquiry into the longitudinal cognitive and societal consequences of human–AI co-creation, particularly the risks of eroded critical thinking, excessive dependence on automation, and the dilution of unique artistic expression through over-reliance on aggregated, AI-generated content.

5. Conclusions

By presenting these difficulties as irreducible paradoxes, we give the field of information science a crucial tool for directing the development of future co-creative information systems that are not only more powerful but also more intuitive, helpful, and ultimately more human. The future of creative design, this article argues, lies in rethinking AI as an active, opinionated collaborator. This partnership requires the human role to evolve from sole creator to creative director: one who provides the vision, intentionality, and curation while leveraging the AI's power for exploration, variation, and pattern synthesis. By synthesizing criticisms of current linear "executor" systems, we have defined the fundamental issues as five irreducible paradoxes. These tensions delimit the core design space for human–AI co-creation, and developers must manage them dynamically rather than attempt to solve them, fostering a cooperative relationship in which AI serves as an inspiration rather than a limitation and people remain the deliberate, creative leaders at the center of the process.

Author Contributions

Conceptualization, Z.S. and R.H.-N.; methodology, Z.S. and R.H.-N.; validation, Z.S. and R.H.-N.; formal analysis, Z.S., R.H.-N. and C.P.; investigation, Z.S. and R.H.-N.; resources, Z.S., R.H.-N. and C.P.; writing—original draft preparation, Z.S.; writing—review and editing, Z.S. and R.H.-N.; visualization, R.H.-N. and Z.S.; supervision, R.H.-N. and C.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by research grant PID2022-137849OB-I00, funded by MICIU/AEI/10.13039/501100011033 and by the ERDF, EU.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Serbanescu, A.; Nack, F. Human-AI system co-creativity for building narrative worlds. In Proceedings of the IASDR 2023: Life-Changing Design, Milan, Italy, 9–13 October 2023; Design Research Society: London, UK, 2023. [Google Scholar]
  2. De Vries, K. You never fake alone. Creative AI in action. Inf. Commun. Soc. 2020, 23, 2110–2127. [Google Scholar] [CrossRef]
  3. Melville, N.P.; Robert, L.; Xiao, X. Putting humans back in the loop: An affordance conceptualization of the 4th industrial revolution. Inf. Syst. J. 2023, 33, 733–757. [Google Scholar] [CrossRef]
  4. Haj-Bolouri, A.; Conboy, K.; Gregor, S. Research Perspectives: An Encompassing Framework for Conceptualizing Space in Information Systems: Philosophical Perspectives, Themes, and Concepts. J. Assoc. Inf. Syst. 2024, 25, 407–441. [Google Scholar] [CrossRef]
  5. Haase, J.; Pokutta, S. Human-AI Co-Creativity: Exploring Synergies Across Levels of Creative Collaboration. arXiv 2024, arXiv:2411.12527. [Google Scholar] [CrossRef]
  6. Jennings, K.E. Developing Creativity: Artificial Barriers in Artificial Intelligence. Minds Mach. 2010, 20, 489–501. [Google Scholar] [CrossRef]
  7. Mateja, D.; Heinzl, A. Towards Machine Learning as an Enabler of Computational Creativity. IEEE Trans. Artif. Intell. 2021, 2, 460–475. [Google Scholar] [CrossRef]
  8. Chiou, E.K.; Lee, J.D. Trusting Automation: Designing for Responsivity and Resilience. Hum Factors 2023, 65, 137–165. [Google Scholar] [CrossRef] [PubMed]
  9. Buschek, D.; Mecke, L.; Lehmann, F.; Dang, H. Nine Potential Pitfalls when Designing Human-AI Co-Creative Systems. arXiv 2021, arXiv:2104.00358. [Google Scholar] [CrossRef]
  10. Haase, J.; Hanel, P.H.P. Artificial muses: Generative artificial intelligence chatbots have risen to human-level creativity. J. Creat. 2023, 33, 100066. [Google Scholar] [CrossRef]
  11. Boden, M.A. Computer Models of Creativity. AI Mag. 2009, 30, 23–34. [Google Scholar] [CrossRef]
  12. Cropley, D.; Cropley, A. Creativity and the Cyber Shock: The Ultimate Paradox. J. Creat. Behav. 2023, 57, 485–487. [Google Scholar] [CrossRef]
  13. Rezwana, J.; Maher, M.L. Designing Creative AI Partners with COFI: A Framework for Modeling Interaction in Human-AI Co-Creative Systems. ACM Trans. Comput.-Hum. Interact. 2023, 30, 1–28. [Google Scholar] [CrossRef]
  14. Demirel, H.O.; Goldstein, M.H.; Li, X.; Sha, Z. Human-Centered Generative Design Framework: An Early Design Framework to Support Concept Creation and Evaluation. Int. J. Hum.-Comput. Interact. 2024, 40, 933–944. [Google Scholar] [CrossRef]
  15. Chen, V.; Liao, Q.V.; Wortman Vaughan, J.; Bansal, G. Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations. Proc. ACM Hum.-Comput. Interact. 2023, 7, 1–32. [Google Scholar] [CrossRef]
  16. Gmeiner, F.; Yang, H.; Yao, L.; Holstein, K.; Martelaro, N. Exploring Challenges and Opportunities to Support Designers in Learning to Co-create with AI-based Manufacturing Design Tools. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 1–20. [Google Scholar] [CrossRef]
  17. Moruzzi, C.; Margarido, S. A User-centered Framework for Human-AI Co-creativity. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 1–9. [Google Scholar] [CrossRef]
  18. Zhou, J.; Li, R.; Tang, J.; Tang, T.; Li, H.; Cui, W.; Wu, Y. Understanding Nonlinear Collaboration between Human and AI Agents: A Co-design Framework for Creative Design. arXiv 2024, arXiv:2401.07312. [Google Scholar] [CrossRef]
  19. Davis, N.; Hsiao, C.-P.; Popova, Y.; Magerko, B. An Enactive Model of Creativity for Computational Collaboration and Co-creation. In Creativity in the Digital Age; Zagalo, N., Branco, P., Eds.; Springer: London, UK, 2015; pp. 109–133. ISBN 978-1-4471-6681-8. [Google Scholar]
  20. Mamykina, L.; Candy, L.; Edmonds, E. Collaborative creativity. Commun. ACM 2002, 45, 96–99. [Google Scholar] [CrossRef]
  21. Stallbaumer, C. Introducing Copilot for Microsoft 365. Microsoft 365 Blog. Available online: https://www.microsoft.com/en-us/microsoft-365/blog/2023/03/16/introducing-microsoft-365-copilot-a-whole-new-way-to-work/ (accessed on 26 August 2025).
  22. Lopes, D.; Correia, J.; Machado, P. EvoDesigner: Towards Aiding Creativity in Graphic Design. In Artificial Intelligence in Music, Sound, Art and Design; Martins, T., Rodríguez-Fernández, N., Rebelo, S.M., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 162–178. [Google Scholar]
  23. Frich, J.; MacDonald Vermeulen, L.; Remy, C.; Biskjaer, M.M.; Dalsgaard, P. Mapping the Landscape of Creativity Support Tools in HCI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–18. [Google Scholar] [CrossRef]
  24. Kantosalo, A.; Jordanous, A. Role-Based Perceptions of Computer Participants in Human-Computer Co-Creativity; AISB: London, UK, 2021; pp. 20–26. Available online: https://aisb.org.uk/wp-content/uploads/2021/04/cc_aisb_proc.pdf (accessed on 26 August 2025).
  25. Liapis, A.; Yannakakis, G.N.; Togelius, J. Computational Game Creativity. 2014. Available online: https://www.um.edu.mt/library/oar/handle/123456789/29473 (accessed on 26 August 2025).
  26. Tan, L.; Luhrs, M. Using Generative AI Midjourney to enhance divergent and convergent thinking in an architect’s creative design process. Des. J. 2024, 27, 677–699. [Google Scholar] [CrossRef]
  27. Gero, J.S. Design Prototypes: A Knowledge Representation Schema for Design. AI Mag. 1990, 11, 26. [Google Scholar] [CrossRef]
  28. Gero, J.S.; Kannengiesser, U. The situated function–behaviour–structure framework. Des. Stud. 2004, 25, 373–391. [Google Scholar] [CrossRef]
  29. Hatchuel, A.; Weil, B. A New Approach of Innovative Design: An Introduction to C-K Theory. In DS 31: Proceedings of ICED 03, the 14th International Conference on Engineering Design, Stockholm; 2003; pp. 109–110. Available online: https://www.designsociety.org/publication/24204/a_new_approach_of_innovative_design_an_introduction_to_c-k_theory (accessed on 13 October 2025).
  30. Howard, T.J.; Culley, S.J.; Dekoninck, E. Describing the creative design process by the integration of engineering design and cognitive psychology literature. Des. Stud. 2008, 29, 160–180. [Google Scholar] [CrossRef]
  31. Girotto, V. Collective Creativity through a Micro-Tasks Crowdsourcing Approach. In Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion, San Francisco, CA, USA, 27 February–2 March 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 143–146. [Google Scholar] [CrossRef]
  32. Koivisto, M.; Grassini, S. Best humans still outperform artificial intelligence in a creative divergent thinking task. Sci. Rep. 2023, 13, 13601. [Google Scholar] [CrossRef]
  33. Grassini, S.; Koivisto, M. Artificial Creativity? Evaluating AI Against Human Performance in Creative Interpretation of Visual Stimuli. Int. J. Hum.-Comput. Interact. 2024, 41, 4037–4048. [Google Scholar] [CrossRef]
  34. Guzik, E.E.; Byrge, C.; Gilde, C. The originality of machines: AI takes the Torrance Test. J. Creat. 2023, 33, 100065. [Google Scholar] [CrossRef]
  35. Erwin, A.K.; Tran, K.; Koutstaal, W. Evaluating the predictive validity of four divergent thinking tasks for the originality of design product ideation. PLoS ONE 2022, 17, e0265116. [Google Scholar] [CrossRef]
  36. Grassini, S. Shaping the Future of Education: Exploring the Potential and Consequences of AI and ChatGPT in Educational Settings. Educ. Sci. 2023, 13, 692. [Google Scholar] [CrossRef]
  37. Hwang, A.H.-C. Too Late to be Creative? AI-Empowered Tools in Creative Processes. In Proceedings of the CHI Conference on Human Factors in Computing Systems Extended Abstracts, New Orleans, LA, USA, 29 April–5 May 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 1–9. [Google Scholar] [CrossRef]
  38. Guo, X.; Xiao, Y.; Wang, J.; Ji, T. Rethinking Designer Agency: A Case Study of Co-Creation Between Designers and AI. IASDR Conference Series. 2023. Available online: https://dl.designresearchsociety.org/iasdr/iasdr2023/fullpapers/170 (accessed on 13 October 2025).
  39. Lee, S.; Law, M.; Hoffman, G. When and How to Use AI in the Design Process? Implications for Human-AI Design Collaboration. Int. J. Hum.-Comput. Interact. 2025, 41, 1569–1584. [Google Scholar] [CrossRef]
  40. Baltà-Salvador, R.; El-Madafri, I.; Brasó-Vives, E.; Peña, M. Empowering Engineering Students Through Artificial Intelligence (AI): Blended Human–AI Creative Ideation Processes with ChatGPT. Comput. Appl. Eng. Educ. 2025, 33, e22817. [Google Scholar] [CrossRef]
  41. Ege, D.N.; Øvrebø, H.H.; Stubberud, V.; Berg, M.F.; Steinert, M.; Vestad, H. Benchmarking AI design skills: Insights from ChatGPT’s participation in a prototyping hackathon. Proc. Des. Soc. 2024, 4, 1999–2008. [Google Scholar] [CrossRef]
  42. Karadağ, D.; Ozar, B. A new frontier in design studio: AI and human collaboration in conceptual design. Front. Archit. Res. 2025, in press. [Google Scholar] [CrossRef]
  43. Weisz, J.D.; Muller, M.; He, J.; Houde, S. Toward General Design Principles for Generative AI Applications. arXiv 2023, arXiv:2301.05578. [Google Scholar] [CrossRef]
  44. Dehghani Champiri, Z. UX Design & Evaluation of HealthQB: A Mobile Application to Manage Chronic Pain. Available online: https://summit.sfu.ca/item/35168 (accessed on 22 December 2023).
  45. Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 2–7 June 2019; Association for Computational Linguistics: Minneapolis, MN, USA, 2019; pp. 4171–4186. [Google Scholar] [CrossRef]
  46. Ross, W. The possibilities of disruption: Serendipity, accidents and impasse driven search. Possibility Stud. Soc. 2023, 1, 489–501. [Google Scholar] [CrossRef]
  47. Foster, M.I.; Keane, M.T. The Role of Surprise in Learning: Different Surprising Outcomes Affect Memorability Differentially. Top. Cogn. Sci. 2019, 11, 75–87. [Google Scholar] [CrossRef] [PubMed]
  48. Ross, W.; Vallée-Tourangeau, F. Microserendipity in the Creative Process. J. Creat. Behav. 2021, 55, 661–672. [Google Scholar] [CrossRef]
  49. Weisberg, R.W. On the Usefulness of “Value” in the Definition of Creativity. Creat. Res. J. 2015, 27, 111–124. [Google Scholar] [CrossRef]
  50. Finn, E. What Algorithms Want: Imagination in the Age of Computing; MIT Press: Cambridge, MA, USA, 2017; ISBN 978-0-262-03592-7. [Google Scholar]
  51. Lisete, B. Serendipity: Obstacles and facilitators. J. Arts Humanit. Soc. Sci. 2025, 2, 50–56. [Google Scholar] [CrossRef]
  52. Fortes, G. Abduction. In The Palgrave Encyclopedia of the Possible; Glăveanu, V.P., Ed.; Springer International Publishing: Cham, Switzerland, 2022; pp. 1–9. ISBN 978-3-030-90913-0. [Google Scholar]
  53. Ayoub, K.; Payne, K. Strategy in the Age of Artificial Intelligence. J. Strateg. Stud. 2016, 39, 793–819. [Google Scholar] [CrossRef]
  54. Bykova, E.A. Reflection as a Factor in the Success of Learners’ Innovative Activity. Lurian J. 2022, 3, 36–45. [Google Scholar] [CrossRef]
  55. Wilkens, U.; Field, A.E. Creative Intent and Reflective Practices for Reliable and Performative Human-AI Systems. Schriftenreihe Der Wiss. Ges. Für Arb.-Und Betriebsorganisation (WGAB) 2023, 2023, 77–94. [Google Scholar] [CrossRef]
  56. Attewell, P. The Deskilling Controversy. Work Occup. 1987, 14, 323–346. [Google Scholar] [CrossRef]
  57. Abdel-Karim, B.M.; Pfeuffer, N.; Carl, K.V.; Hinz, O. How AI-Based Systems Can Induce Reflections: The Case of AI-Augmented Diagnostic Work. MIS Q. 2023, 47, 1395–1424. [Google Scholar] [CrossRef]
  58. Shiiku, S.; Marjieh, R.; Anglada-Tort, M.; Jacoby, N. The Dynamics of Collective Creativity in Human-AI Hybrid Societies. arXiv 2025, arXiv:2502.17962. [Google Scholar] [CrossRef]
  59. Linares-Pellicer, J.; Izquierdo-Domenech, J.; Ferri-Molla, I.; Aliaga-Torro, C. We Are All Creators: Generative AI, Collective Knowledge, and the Path Towards Human-AI Synergy. arXiv 2025, arXiv:2504.07936. [Google Scholar] [CrossRef]
  60. Fan, S.; Taylor, M. Will AI Replace Us? Thames and Hudson Ltd.: London, UK, 2019; Available online: https://library.fra.ac.uk/bib/37629 (accessed on 10 September 2025).
  61. Gunkel, D.J. Generative AI and Remix: Difference and Repetition. In The Routledge Companion to Remix Studies, 2nd ed.; Routledge: Oxfordshire, UK, 2025. [Google Scholar]
  62. Günay, M. Artificial Intelligence and Originality in Design. ART/Icle 2025, 4, 449–469. [Google Scholar] [CrossRef]
  63. Orozco, L. Holly Herndon. New Suns. 2023. Available online: https://newsuns.net/holly-herndon-spawning-identities/ (accessed on 10 September 2025).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
