Search Results (18)

Search Parameters:
Keywords = tokenisation

18 pages, 1153 KB  
Article
AI-Powered Buy-Now-Pay-Later Smart Contracts in Healthcare
by Ângela Filipa Oliveira Gonçalves, Shafik Faruc Norali and Clemens Bechter
FinTech 2025, 4(2), 24; https://doi.org/10.3390/fintech4020024 - 11 Jun 2025
Viewed by 2602
Abstract
As healthcare systems face mounting pressure to modernise payment infrastructure, fintech innovations have emerged as potential tools to improve affordability and efficiency. However, the adoption of these technologies in clinical settings remains limited. This study investigated the perceptions and resistance patterns of healthcare professionals toward Buy-Now-Pay-Later technology and blockchain in healthcare finance, using Innovation Resistance Theory as the guiding framework. Survey data collected from medical practitioners (N = 366) were analysed to identify knowledge gaps, perceived risks, and tradition-related barriers that influence adoption intent. The findings reveal that while interest in financial innovation exists, resistance is driven by institutional conservatism, regulatory uncertainty, and limited familiarity with decentralised finance systems. This research contributes to the literature by offering a theory-based explanation for why even high-potential financial tools face behavioural and structural resistance in healthcare environments.

18 pages, 1662 KB  
Article
PatchCTG: A Patch Cardiotocography Transformer for Antepartum Fetal Health Monitoring
by M. Jaleed Khan, Manu Vatish and Gabriel Davis Jones
Sensors 2025, 25(9), 2650; https://doi.org/10.3390/s25092650 - 22 Apr 2025
Viewed by 1373
Abstract
Antepartum Cardiotocography (CTG) is a biomedical sensing technology widely used for fetal health monitoring. While the visual interpretation of CTG traces is highly subjective, with inter-observer agreement as low as 29% and a false positive rate of approximately 60%, the Dawes–Redman system provides an automated approach to fetal well-being assessment. However, it is primarily designed to rule out adverse outcomes rather than detect them, resulting in high specificity (90.7%) but low sensitivity (18.2%) in identifying fetal distress. This paper introduces PatchCTG, an AI-enabled biomedical time series transformer for CTG analysis. It employs patch-based tokenisation, instance normalisation, and channel-independent processing to capture essential local and global temporal dependencies within CTG signals. PatchCTG was evaluated on the Oxford Maternity (OXMAT) dataset, which comprises over 20,000 high-quality CTG traces from diverse clinical outcomes, after applying the inclusion and exclusion criteria. With extensive hyperparameter optimisation, PatchCTG achieved an AUC of 0.77, with a specificity of 88% and a sensitivity of 57% at Youden’s index threshold, demonstrating its adaptability to various clinical needs. Its robust performance across varying temporal thresholds highlights its potential for both real-time and retrospective analysis in sensor-driven fetal monitoring; in particular, fine-tuning on data closer to delivery achieved a sensitivity of 52% and a specificity of 88% for near-delivery cases. These findings suggest the potential of PatchCTG to enhance clinical decision-making in antepartum care by providing a sensor-based, AI-driven, objective tool for reliable fetal health assessment.
(This article belongs to the Section Sensing and Imaging)
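
The abstract's core preprocessing step, patch-based tokenisation with instance normalisation and channel-independent processing, can be sketched in a few lines. The following is a minimal interpretation of that idea rather than the published PatchCTG code; patch length, stride, and model width are illustrative assumptions.

```python
# Minimal sketch of patch-based tokenisation for a 1-D biomedical signal,
# in the spirit of PatchCTG (patch length, stride and d_model are
# illustrative assumptions, not the paper's published configuration).
import torch
import torch.nn as nn

class PatchTokeniser(nn.Module):
    def __init__(self, patch_len=16, stride=8, d_model=128):
        super().__init__()
        self.patch_len, self.stride = patch_len, stride
        self.project = nn.Linear(patch_len, d_model)  # one embedding per patch

    def forward(self, x):  # x: (batch, channels, time)
        # Instance normalisation: zero mean / unit variance per trace
        x = (x - x.mean(-1, keepdim=True)) / (x.std(-1, keepdim=True) + 1e-5)
        # Channel-independent processing: fold channels into the batch axis
        b, c, t = x.shape
        x = x.reshape(b * c, 1, t)
        patches = x.unfold(-1, self.patch_len, self.stride)  # (b*c, 1, n_patches, patch_len)
        return self.project(patches.squeeze(1))              # (b*c, n_patches, d_model)

tokens = PatchTokeniser()(torch.randn(4, 2, 240))  # e.g. heart-rate + uterine channels
print(tokens.shape)  # torch.Size([8, 29, 128])
```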

22 pages, 6998 KB  
Article
VBCNet: A Hybird Network for Human Activity Recognition
by Fei Ge, Zhenyang Dai, Zhimin Yang, Fei Wu and Liansheng Tan
Sensors 2024, 24(23), 7793; https://doi.org/10.3390/s24237793 - 5 Dec 2024
Cited by 2 | Viewed by 1090
Abstract
In recent years, human activity recognition based on the channel state information (CSI) of Wi-Fi signals has attracted growing attention, as it avoids deploying additional devices and reduces the risk of personal privacy leakage. In this paper, we propose a hybrid network architecture, named VBCNet, that can effectively identify human activity postures. Firstly, we extract CSI sequences from each antenna of the Wi-Fi signal, and the data are preprocessed and tokenised. Then, in the encoder part of the model, we introduce a layer of long short-term memory network to further extract the temporal features in the sequences and enhance the ability of the model to capture temporal information. Meanwhile, VBCNet employs a convolutional feed-forward network instead of the traditional feed-forward network to enhance the model’s ability to process local and multi-scale features. Finally, the model classifies the extracted features into human behaviours through a classification layer. To validate the effectiveness of VBCNet, we conducted experimental evaluations on the classical human activity recognition datasets UT-HAR and Widar3.0 and achieved accuracies of 98.65% and 77.92%, respectively. These results show that VBCNet exhibits very high effectiveness and robustness in human activity recognition tasks in complex scenarios.
(This article belongs to the Section Sensor Networks)
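
One distinctive choice in the abstract is replacing the transformer's position-wise feed-forward network with a convolutional one to capture local and multi-scale features. A minimal sketch of that substitution, with layer sizes assumed purely for illustration, might look as follows.

```python
# Hedged sketch of swapping a transformer's feed-forward network for a
# convolutional one, as VBCNet's abstract describes; sizes are assumptions.
import torch
import torch.nn as nn

class ConvFeedForward(nn.Module):
    """Convolutions over the token axis see neighbouring tokens, capturing
    local structure that a plain position-wise MLP cannot."""
    def __init__(self, d_model=128, kernel_size=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(d_model, 4 * d_model, kernel_size, padding=kernel_size // 2),
            nn.GELU(),
            nn.Conv1d(4 * d_model, d_model, 1),  # point-wise projection back
        )

    def forward(self, x):                        # x: (batch, tokens, d_model)
        return self.net(x.transpose(1, 2)).transpose(1, 2)

out = ConvFeedForward()(torch.rand(4, 29, 128))  # same shape in, same shape out
print(out.shape)  # torch.Size([4, 29, 128])
```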

31 pages, 4756 KB  
Article
Blockchain Technology Application Domains along the E-Commerce Value Chain—A Qualitative Content Analysis of News Articles
by Josepha Witt and Mareike Schoop
Blockchains 2024, 2(3), 234-264; https://doi.org/10.3390/blockchains2030012 - 12 Jul 2024
Viewed by 4850
Abstract
Blockchain Technology (BCT) offers several possible applications in the field of electronic commerce (e-commerce), such as decentralised marketplaces or payments in cryptocurrencies. Even though these applications of BCT have already been explored in the academic literature, a comprehensive collection along the whole e-commerce value chain is still missing. Furthermore, the existing comprehensive reviews are based on the academic literature, whilst the evolution and further development of BCT are highly driven by practitioners. Therefore, we aim to understand how and why BCT is used in e-commerce based on a qualitative content analysis of news articles, i.e., we apply scientific methods to content that reports the latest developments in the field. As a result, we describe the multiple application domains of BCT along the e-commerce value chain. Subsequently, we discuss the main underlying principles of BCT usage across all the value chain steps.
(This article belongs to the Special Issue Feature Papers in Blockchains)

23 pages, 521 KB  
Review
From NFT 1.0 to NFT 2.0: A Review of the Evolution of Non-Fungible Tokens
by Barbara Guidi and Andrea Michienzi
Future Internet 2023, 15(6), 189; https://doi.org/10.3390/fi15060189 - 24 May 2023
Cited by 40 | Viewed by 7095
Abstract
Non-fungible tokens (NFTs) represent one of the most important technologies in the space of Web3. Thanks to NFTs, digital or physical assets can be tokenised to represent their ownership through the usage of smart contracts and blockchains. The first generation of this technology, called NFT 1.0, considers static tokens described by a set of metadata that cannot be changed after token creation. This static nature has prevented their widespread adoption, as they do not support any meaningful user interaction. For this reason, an evolution called NFT 2.0 has been proposed to make tokens interactive and dynamic and enhance the user experience, opening the possibility of using NFTs in more ways and scenarios. The purpose of this article is to review the transition from NFT 1.0 to NFT 2.0, focusing on the newly introduced properties and features and the rising challenges. In particular, we discuss the technical aspects of blockchain technology and its impact on NFTs. We provide a detailed description of NFT properties and standards on various blockchains and discuss the support of the most important blockchains for NFTs. Then, we discuss the properties and features introduced by NFT 2.0 and detail the technical challenges related to metadata and dynamism. Lastly, we conclude by highlighting the new application scenarios opened by NFT 2.0. This review serves as a solid base for future research on the topic, as it highlights the current technological challenges that must be addressed to enable the wide adoption of NFT 2.0.
(This article belongs to the Special Issue Blockchain Security and Privacy II)

29 pages, 6073 KB  
Article
Blockchain-Based Traceability Architecture for Mapping Object-Related Supply Chain Events
by Fabian Dietrich, Louis Louw and Daniel Palm
Sensors 2023, 23(3), 1410; https://doi.org/10.3390/s23031410 - 27 Jan 2023
Cited by 18 | Viewed by 5692
Abstract
Supply chains have evolved into dynamic, interconnected supply networks, which increases the complexity of achieving end-to-end traceability of object flows and the events they experience. With its capability of ensuring a secure, transparent, and immutable environment without relying on a trusted third party, the emerging blockchain technology shows strong potential to enable end-to-end traceability in such complex multitiered supply networks. This paper aims to overcome the limitations of existing blockchain-based traceability architectures regarding their object-related event mapping ability, which involves mapping the creation and deletion of objects, their aggregation and disaggregation, transformation, and transaction, in one holistic architecture. Therefore, this paper proposes a novel ‘blueprint-based’ token concept, which allows clients to group tokens into different types, where tokens of the same type are non-fungible. Furthermore, blueprints can include minting conditions, which, for example, are necessary when mapping assembly processes. In addition, the token concept contains logic for reflecting all conducted object-related events in an integrated token history. Finally, for validation purposes, this article implements the architecture’s components in code and proves its applicability on the Ethereum blockchain. As a result, the proposed blockchain-based traceability architecture covers all object-related supply chain events and demonstrates general-purpose, end-to-end traceability of object flows.
(This article belongs to the Section Industrial Sensors)
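
The paper's validation targets the Ethereum blockchain; the plain-Python sketch below only illustrates the blueprint idea itself: tokens grouped into types, minting conditions that gate assembly-like transformations, and an integrated event history. All names and fields are assumptions for illustration, not the authors' implementation.

```python
# Illustrative model of 'blueprint-based' tokens with minting conditions
# and an integrated per-token event history (hypothetical names/fields).
from dataclasses import dataclass, field

@dataclass
class Blueprint:
    type_name: str
    minting_condition: callable = lambda inputs: True  # e.g. required sub-parts

@dataclass
class Token:
    token_id: int
    blueprint: Blueprint
    history: list = field(default_factory=list)

    def record(self, event: str, **details):
        self.history.append({"event": event, **details})

def mint(blueprint: Blueprint, token_id: int, inputs=()):
    # Minting conditions model e.g. assembly: parts must exist before the whole
    if not blueprint.minting_condition(inputs):
        raise ValueError("minting condition not met")
    token = Token(token_id, blueprint)
    token.record("created", consumed=[t.token_id for t in inputs])
    return token

wheel_bp = Blueprint("wheel")
bike_bp = Blueprint("bicycle", minting_condition=lambda parts: len(parts) >= 2)
wheels = [mint(wheel_bp, i) for i in (1, 2)]
bike = mint(bike_bp, 3, inputs=wheels)  # transformation: wheels -> bicycle
print(bike.history)  # [{'event': 'created', 'consumed': [1, 2]}]
```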

22 pages, 2289 KB  
Article
Smart or Intelligent Assets or Infrastructure: Technology with a Purpose
by Will Serrano
Buildings 2023, 13(1), 131; https://doi.org/10.3390/buildings13010131 - 4 Jan 2023
Cited by 13 | Viewed by 5024
Abstract
Smart or intelligent built assets, including infrastructure, buildings, real estate, and cities, provide enhanced functionality to their different users, such as occupiers, passengers, consumers, patients, managers, or operators. This enhanced functionality, enabled by the Internet of Things (IoT), Artificial Intelligence (AI), Big Data, Mobile Apps, Virtual Reality (VR), and 5G, does not only translate into a superior user experience; technology also supports sustainability and energy-consumption targets to meet regulations (ESG, NZC) while optimising asset management and operations for enhanced business economic performance. The main peculiarity is that technology is standardised, ubiquitous, and independent of the physical built assets, whereas asset users, including humans, machines, and devices, are also common to different assets. This article analyses the atomic differences between built assets and proposes an asset omni-management model based on the micro-management of services that will support the macro-functionality of the asset. The proposed key concept is based on the standardisation of different assets according to common and specific functionality and services delivered by the technology stack that is already supporting the transition to Industry 5.0 based on Web 3.0 and Tokenisation.

20 pages, 4907 KB  
Article
Co-De|GT: The Gamification and Tokenisation of More-Than-Human Qualities and Values
by Marie Davidová, Shanu Sharma, Dermott McMeel and Fernando Loizides
Sustainability 2022, 14(7), 3787; https://doi.org/10.3390/su14073787 - 23 Mar 2022
Cited by 16 | Viewed by 4896
Abstract
The article explores how the quality of life within a deprived urban environment might be improved through the ‘gamification’ of, and interaction with, more-than-human elements within the environment. It argues that such quality may be achieved through the community’s multicentred value from the bottom up. This is shown through the case study of the Co-De|GT urban mobile application, which was developed in the Synergetic Landscapes unit through real-life research by design in experimental studio teaching. Complementary experimentation took place during the Relating Systems Thinking and Design 10 symposium in the Co-De|BP workshop, where experts were collocated for interactive real-time data gathering. This application addresses the need for collective action towards more-than-human synergy across an urban ecosystem through gamification, community collaboration, and DIY culture. It intends to generate a sustainable, scalable token economy where humans and non-humans play equal roles, earning, trading, and being paid for goods and services, to test such potentials for future economies underpinned by blockchain. This work diverges from dominant economic models that do not recognise the performance of, and the limits to, material extraction from the ecosystem. The current economic model has led to the global financial crisis (GFC). Furthermore, it is based on the unsustainable perpetual consumption of services and goods, which may lead to the untangling and critical failure of the market system globally. Therefore, this work investigates how gamification and tokenisation may support a complementary and parallel economic market that sustains and grows urban ecosystems. While the research does not speculate on policy implications, it posits how such markets may ameliorate some of the brittleness apparent in the global economic model. It demonstrates a systemic approach to urban ecosystem performance for future post-Anthropocene communities and economies.
(This article belongs to the Special Issue Quality as Driver for Sustainable Construction)

23 pages, 3783 KB  
Article
MetaboListem and TABoLiSTM: Two Deep Learning Algorithms for Metabolite Named Entity Recognition
by Cheng S. Yeung, Tim Beck and Joram M. Posma
Metabolites 2022, 12(4), 276; https://doi.org/10.3390/metabo12040276 - 22 Mar 2022
Cited by 10 | Viewed by 3662
Abstract
Reviewing the metabolomics literature is becoming increasingly difficult because of the rapid expansion of relevant journal literature. Text-mining technologies are therefore needed to facilitate more efficient literature reviews. Here we contribute a standardised corpus of full-text publications from metabolomics studies and describe the development of two metabolite named entity recognition (NER) methods. These methods are based on Bidirectional Long Short-Term Memory (BiLSTM) networks, and each incorporates different transfer learning techniques (for tokenisation and word embedding). Our first model (MetaboListem) follows prior methodology using GloVe word embeddings. Our second model exploits BERT and BioBERT for embedding and is named TABoLiSTM (Transformer-Affixed BiLSTM). The methods are trained on a novel corpus annotated using rule-based methods and evaluated on manually annotated metabolomics articles. MetaboListem (F1-score 0.890, precision 0.892, recall 0.888) and TABoLiSTM (BioBERT version: F1-score 0.909, precision 0.926, recall 0.893) have achieved state-of-the-art performance on metabolite NER. A training corpus of sentences from >1000 full-text Open Access metabolomics publications with 105,335 annotated metabolites was created, as well as a manually annotated test corpus (19,138 annotations). This work demonstrates that deep learning algorithms are capable of identifying metabolite names accurately and efficiently in text. The proposed corpus and NER algorithms can be used for metabolomics text-mining tasks such as information retrieval, document classification, and literature-based discovery, and are available from the omicsNLP GitHub repository.
(This article belongs to the Special Issue Metabolomics in the Age of Cloud Computing, AI and Machine Learning)
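
A minimal BiLSTM token tagger conveys the architectural backbone shared by MetaboListem and TABoLiSTM. The sketch below is a generic PyTorch rendition under assumed dimensions and a simple BIO tag set; the published models add GloVe or (Bio)BERT embeddings and affix handling.

```python
# Generic BiLSTM sequence tagger for metabolite NER (dimensions, vocabulary
# size and the B-MET/I-MET/O tag set are assumptions for illustration).
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=30_000, emb_dim=100, hidden=256, n_tags=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.classify = nn.Linear(2 * hidden, n_tags)  # B-MET / I-MET / O

    def forward(self, token_ids):            # (batch, seq_len)
        h, _ = self.bilstm(self.embed(token_ids))
        return self.classify(h)              # per-token tag logits

logits = BiLSTMTagger()(torch.randint(0, 30_000, (2, 40)))
print(logits.shape)  # torch.Size([2, 40, 3])
```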

39 pages, 794 KB  
Review
A Survey on Text Classification Algorithms: From Text to Predictions
by Andrea Gasparetto, Matteo Marcuzzo, Alessandro Zangari and Andrea Albarelli
Information 2022, 13(2), 83; https://doi.org/10.3390/info13020083 - 11 Feb 2022
Cited by 180 | Viewed by 29984
Abstract
In recent years, the exponential growth of digital documents has been met by rapid progress in text classification techniques. Newly proposed machine learning algorithms leverage the latest advancements in deep learning methods, allowing for the automatic extraction of expressive features. The swift development of these methods has led to a plethora of strategies to encode natural language into machine-interpretable data. The latest language modelling algorithms are used in conjunction with ad hoc preprocessing procedures, whose description is often omitted in favour of a more detailed explanation of the classification step. This paper offers a concise review of recent text classification models, with an emphasis on the flow of data, from raw text to output labels. We highlight the differences between earlier methods and more recent, deep learning-based methods, both in how they function and in how they transform input data. To give a better perspective on the text classification landscape, we provide an overview of datasets for the English language, as well as instructions for the synthesis of two new multilabel datasets, which we found to be particularly scarce in this setting. Finally, we outline new experimental results and discuss the open research challenges posed by deep learning-based language models.
(This article belongs to the Topic Big Data and Artificial Intelligence)
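
The data flow the survey traces, from raw text through tokenisation and vectorisation to output labels, is easiest to see in a classical pipeline. The toy example below uses scikit-learn with invented sample texts; it stands in for the earlier, pre-deep-learning methods the survey contrasts with neural models.

```python
# Classical raw-text-to-label flow: tokenise/vectorise, then classify.
# Sample texts and labels are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["payment settled on chain", "fetal heart rate trace reviewed"]
labels = ["finance", "medicine"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["trace of the fetal heart"]))  # expected: ['medicine']
```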

22 pages, 3001 KB  
Article
A Text Mining Approach in the Classification of Free-Text Cancer Pathology Reports from the South African National Health Laboratory Services
by Okechinyere J. Achilonu, Victor Olago, Elvira Singh, René M. J. C. Eijkemans, Gideon Nimako and Eustasius Musenge
Information 2021, 12(11), 451; https://doi.org/10.3390/info12110451 - 30 Oct 2021
Cited by 9 | Viewed by 4985
Abstract
A cancer pathology report is a valuable medical document that provides information for the clinical management of the patient and the evaluation of health care. However, there are variations in the quality of reporting in free-text formats, ranging from comprehensive to incomplete. Moreover, the increasing incidence of cancer has generated a high throughput of pathology reports. Hence, the manual extraction and classification of information from these reports can be intrinsically complex and resource-intensive. This study aimed to (i) evaluate the quality of over 80,000 breast, colorectal, and prostate cancer free-text pathology reports and (ii) assess the effectiveness of random forest (RF) and variants of support vector machine (SVM) in the classification of reports into benign and malignant classes. The study approach comprises data preprocessing, visualisation, feature selection, text classification, and the evaluation of performance metrics. The performance of the classifiers was evaluated across various feature sizes, which were jointly selected by four filter feature selection methods. The feature selection methods identified established clinical terms synonymous with each of the three cancers. Uni-gram tokenisation with the classifiers showed that the predictive power of the RF model was consistent across various feature sizes, with overall F-scores of 95.2%, 94.0%, and 95.3% for breast, colorectal, and prostate cancer classification, respectively. The radial SVM achieved better classification performance than its linear variant for most feature sizes. The classifiers also achieved high precision, recall, and accuracy. This study supports a nationally agreed standard in pathology reporting and the use of text mining for the encoding, classification, and production of high-quality information abstractions for cancer prognosis and research.
(This article belongs to the Special Issue Health Data Information Retrieval)
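
The classification setup described above, uni-gram features feeding a random forest and radial versus linear SVMs, can be sketched compactly. The toy reports below are invented, and the real study selected features with four filter methods rather than using the full vocabulary.

```python
# Uni-gram features with RF and radial/linear SVM classifiers (toy data).
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

reports = ["invasive ductal carcinoma present", "benign fibroadenoma, no malignancy"]
y = ["malignant", "benign"]

X = CountVectorizer(ngram_range=(1, 1)).fit_transform(reports)  # uni-grams only
for clf in (RandomForestClassifier(), SVC(kernel="rbf"), SVC(kernel="linear")):
    print(type(clf).__name__, clf.fit(X, y).predict(X))
```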

22 pages, 947 KB  
Article
A Chemical Analysis of Hybrid Economic Systems—Tokens and Money
by Anabele-Linda Pardi and Mario Paolucci
Mathematics 2021, 9(20), 2607; https://doi.org/10.3390/math9202607 - 16 Oct 2021
Cited by 3 | Viewed by 3668
Abstract
With the influence of digital technology in our daily lives continuously growing, we investigate methods for assessing the stability, sustainability, and design of token economies that include both tokens and conventional currencies. Based on a chemical approach, we model markets with a minimum number of variables and compare the transaction rates, stability, and token design properties at different levels of tokenisation. The kinetic study reveals that, under certain conditions, if the price of a product contains both conventional money and tokens, this combination can be treated as one composite currency. The analysed systems are shown to be dynamically stable for the chosen models. Moreover, by applying the law of supply and demand to recalculate the prices of products, the need for prior knowledge of certain token attributes—token divisibility and token–money exchange rates—emerges. The chemical framework, along with the analytic methods that we propose, is flexible enough to be adjusted to a variety of conditions and offers valuable information about economic systems.
(This article belongs to the Special Issue Sustainability Issues and Mathematical Models of Digital Technologies)
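
The chemical analogy can be made concrete with a toy mass-action model in which a purchase consumes both money and tokens, treating the pair as a composite currency. The rate law and constants below are illustrative assumptions, not the paper's equations.

```python
# Toy kinetic sketch of a tokenised market: money (M) and tokens (T) are
# 'species' consumed together when goods (G) are purchased, at a mass-action
# rate. Rate constant and initial holdings are invented for illustration.
from scipy.integrate import solve_ivp

k = 0.05  # transaction rate constant (assumed)

def market(_, y):
    m, t, g = y
    rate = k * m * t          # a composite price needs both currencies
    return [-rate, -rate, rate]

sol = solve_ivp(market, (0, 100), [10.0, 5.0, 0.0])
print(sol.y[:, -1])  # money, tokens and goods at t = 100
```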

17 pages, 333 KB  
Article
A Study of Analogical Density in Various Corpora at Various Granularity
by Rashel Fam and Yves Lepage
Information 2021, 12(8), 314; https://doi.org/10.3390/info12080314 - 5 Aug 2021
Cited by 3 | Viewed by 2967
Abstract
In this paper, we inspect the theoretical problem of counting the number of analogies between sentences contained in a text. Based on this, we measure the analogical density of the text. We focus on analogy at the sentence level, based on the level of form rather than the level of semantics. Experiments are carried out on two different corpora in six European languages known to have various levels of morphological richness. The corpora are tokenised using several tokenisation schemes: character, sub-word, and word. For the sub-word tokenisation scheme, we employ two popular sub-word models: the unigram language model and byte-pair encoding. The results show that a corpus with a higher Type-Token Ratio tends to have a higher analogical density. We also observe that masking tokens based on their frequency helps to increase the analogical density. As for the tokenisation scheme, the results show that analogical density decreases from the character level to the word level. However, this is not true when tokens are masked based on their frequencies. We find that tokenising the sentences using sub-word models and masking the least frequent tokens increases analogical density.
(This article belongs to the Special Issue Novel Methods and Applications in Natural Language Processing)
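
The interplay between tokenisation granularity and the Type-Token Ratio is easy to demonstrate. The sketch below compares character, sub-word, and word tokenisation on one sentence; the crude vowel-based split merely stands in for a trained unigram-LM or BPE model.

```python
# Type-Token Ratio (TTR) at three tokenisation granularities. The regex
# 'sub-word' split is a toy stand-in for trained sub-word models.
import re

sentence = "tokenisation density varies with granularity"

schemes = {
    "character": list(sentence.replace(" ", "")),
    "sub-word": re.findall(r"[^aeiou\s]*[aeiou]+|[^aeiou\s]+", sentence),
    "word": sentence.split(),
}
for name, tokens in schemes.items():
    print(f"{name:>9}: TTR = {len(set(tokens)) / len(tokens):.2f} ({len(tokens)} tokens)")
```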

17 pages, 298 KB  
Article
Creating Welsh Language Word Embeddings
by Padraig Corcoran, Geraint Palmer, Laura Arman, Dawn Knight and Irena Spasić
Appl. Sci. 2021, 11(15), 6896; https://doi.org/10.3390/app11156896 - 27 Jul 2021
Cited by 5 | Viewed by 3336
Abstract
Word embeddings are representations of words in a vector space that models semantic relationships between words by means of distance and direction. In this study, we adapted two existing methods, word2vec and fastText, to automatically learn Welsh word embeddings, taking into account the syntactic and morphological idiosyncrasies of this language. These methods exploit the principles of distributional semantics and, therefore, require a large corpus to be trained on. However, Welsh is a minoritised language, hence significantly less Welsh-language data are publicly available in comparison to English. Consequently, assembling a sufficiently large text corpus is not a straightforward endeavour. Nonetheless, we compiled a corpus of 92,963,671 words from 11 sources, which represents the largest corpus of Welsh. The relative complexity of Welsh punctuation made the tokenisation of this corpus relatively challenging, as punctuation could not be used for boundary detection. We considered several tokenisation methods, including one designed specifically for Welsh. To account for rich inflection, we used a method for learning word embeddings that is based on subwords and can therefore more effectively relate different surface forms during the training phase. We conducted both qualitative and quantitative evaluations of the resulting word embeddings, which outperformed previously described Welsh word embeddings created as part of a larger study including 157 languages. Our study was the first to focus specifically on Welsh word embeddings.
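
Subword-based embedding learning of the kind the study adapts can be sketched with gensim's FastText, which learns character n-gram vectors so that related inflected forms share representation. The two-sentence corpus below is a stub; the study trained on the full 92,963,671-word corpus with Welsh-specific tokenisation.

```python
# Subword-aware embeddings via gensim FastText; hyperparameters and the
# tiny Welsh corpus here are stand-ins, not the study's configuration.
from gensim.models import FastText

corpus = [
    ["mae", "hi", "yn", "braf", "heddiw"],
    ["roedd", "hi", "yn", "braf", "ddoe"],
]  # pre-tokenised sentences

model = FastText(corpus, vector_size=100, min_n=3, max_n=6, min_count=1, epochs=10)
print(model.wv["braf"][:5])  # character n-grams cover even unseen inflections
```
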
22 pages, 9674 KB  
Article
COLreg: The Tokenised Cross-Species Multicentred Regenerative Region Co-Creation
by Marie Davidová and Kateřina Zímová
Sustainability 2021, 13(12), 6638; https://doi.org/10.3390/su13126638 - 10 Jun 2021
Cited by 6 | Viewed by 3556
Abstract
This article argues that whilst our current economic models are dependent on the overall ecosystem, they do not reflect this fact. As a result, we are facing the Anthropocene mass extinction. The paper presents the co-creation and tokenisation of a collaborative regenerative region (COLreg), involving multiple human and non-human, living and non-living stakeholders. It unfolds different stages of multicentred, systemic co-design via collaborative gigamapping. In the first steps, certain stakeholders are present and others are represented, whilst in the final stages of generative development, all stakeholders, even those who were previously only potential stakeholders, take an active role. The ‘COLreg’ project represents a holistic approach that reflects today’s most burning issues, such as biodiversity decrease, unsustainable food production, and unsustainable economic models and social systems. It combines top-down and bottom-up approaches to co-create regional social and environmental justice for the coming symbiotic post-Anthropocene era.
