
Information, Volume 2, Issue 1 (March 2011) – 10 articles, Pages 1–246

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
853 KiB  
Article
Finding Emotional-Laden Resources on the World Wide Web
by Kathrin Knautz, Diane Rasmussen Neal, Stefanie Schmidt, Tobias Siebenlist and Wolfgang G. Stock
Information 2011, 2(1), 217-246; https://doi.org/10.3390/info2010217 - 02 Mar 2011
Cited by 8 | Viewed by 9636
Abstract
Some content in multimedia resources can depict or evoke certain emotions in users. The aim of Emotional Information Retrieval (EmIR) and of our research is to identify knowledge about emotional-laden documents and to use these findings in a new kind of World Wide Web information service that allows users to search and browse by emotion. Our prototype, called Media EMOtion SEarch (MEMOSE), is largely based on the results of research regarding emotive music pieces, images and videos. In order to index both evoked and depicted emotions in these three media types and to make them searchable, we work with a controlled vocabulary, slide controls to adjust the emotions’ intensities, and broad folksonomies to identify and separate the correct resource-specific emotions. This separation of so-called power tags is based on a tag distribution which follows either an inverse power law (only one emotion was recognized) or an inverse-logistical shape (two or three emotions were recognized). Both distributions are well known in information science. MEMOSE consists of a tool for tagging basic emotions with the help of slide controls, a processing device to separate power tags, a retrieval component consisting of a search interface (for any topic in combination with one or more emotions) and a results screen. The latter shows two separately ranked lists of items for each media type (depicted and felt emotions), displaying thumbnails of resources, ranked by the mean values of intensity. In the evaluation of the MEMOSE prototype, study participants described our EmIR system as an enjoyable Web 2.0 service.
(This article belongs to the Special Issue What Is Information?)
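The power-tag separation described in the abstract can be illustrated with a minimal sketch. This is a hypothetical heuristic, not the authors' actual procedure: it keeps only the head of the ranked tag distribution, which stays short when frequencies follow an inverse power law (one dominant emotion) or an inverse-logistic curve (two or three dominant emotions). The function name and the `drop_ratio` threshold are illustrative assumptions.

```python
def power_tags(tag_counts, drop_ratio=0.5):
    """Select dominant 'power tags' from an emotion-tag frequency distribution.

    Walk the ranked frequencies and keep tags until the next count falls
    below `drop_ratio` times the previous one, i.e. keep the short head
    that both an inverse power law (one winner) and an inverse-logistic
    plateau (two or three winners) produce.
    """
    ranked = sorted(tag_counts.items(), key=lambda kv: kv[1], reverse=True)
    selected = [ranked[0]]
    for tag, count in ranked[1:]:
        if count < drop_ratio * selected[-1][1]:
            break
        selected.append((tag, count))
    return [tag for tag, _ in selected]
```

For example, a distribution with one dominant emotion, `{"joy": 40, "surprise": 6, "anger": 3}`, yields only `joy` as a power tag, while `{"joy": 40, "sadness": 35, "anger": 4}` yields both `joy` and `sadness`.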
164 KiB  
Article
Trust, Privacy, and Frame Problems in Social and Business E-Networks, Part 1
by Jeff Buechner
Information 2011, 2(1), 195-216; https://doi.org/10.3390/info2010195 - 01 Mar 2011
Cited by 1 | Viewed by 5875
Abstract
Privacy issues in social and business e-networks are daunting in complexity—private information about oneself might be routed through countless artificial agents. For each such agent, in that context, two questions about trust are raised: Where an agent must access (or store) personal information, can one trust that artificial agent with that information and, where an agent does not need to either access or store personal information, can one trust that agent not to either access or store that information? It would be an infeasible task for any human being to explicitly determine, for each artificial agent, whether it can be trusted. That is, no human being has the computational resources to make such an explicit determination. There is a well-known class of problems in the artificial intelligence literature, known as frame problems, where explicit solutions to them are computationally infeasible. Human common sense reasoning solves frame problems, though the mechanisms employed are largely unknown. I will argue that the trust relation between two agents (human or artificial) functions, in some respects, as a frame problem solution. That is, a problem is solved without the need for a computationally infeasible explicit solution. This is an aspect of the trust relation that has remained unexplored in the literature. Moreover, there is a formal, iterative structure to agent-agent trust interactions that serves to establish the trust relation non-circularly, to reinforce it, and to “bootstrap” its strength.
(This article belongs to the Special Issue Trust and Privacy in Our Networked World)
752 KiB  
Article
Accuracy in Biological Information Technology Involves Enzymatic Quantum Processing and Entanglement of Decohered Isomers
by Willis Grant Cooper
Information 2011, 2(1), 166-194; https://doi.org/10.3390/info2010166 - 25 Feb 2011
Cited by 6 | Viewed by 7088
Abstract
Genetic specificity information “seen by” the transcriptase is in terms of hydrogen bonded proton states, which initially are metastable amino (–NH₂) and, consequently, are subjected to quantum uncertainty limits. This introduces a probability of arrangement, keto-amino → enol-imine, where product protons participate in coupled quantum oscillations at frequencies of ~10¹³ s⁻¹ and are entangled. The enzymatic ket for the four G′-C′ coherent protons is │ψ > = α│+ − + − > + β│+ − − + > + γ│− + + − > + δ│− + − + >. Genetic specificities of superposition states are processed quantum mechanically, in an interval ∆t << 10⁻¹³ s, causing an additional entanglement between coherent protons and transcriptase units. The input qubit at G-C sites causes base substitution, whereas coherent states within A-T sites cause deletion. Initially decohered enol and imine G′ and *C isomers are “entanglement-protected” and participate in Topal-Fresco substitution-replication which, in the 2nd round of growth, reintroduces the metastable keto-amino state. Since experimental lifetimes of metastable keto-amino states at 37 °C are ≥ ~3000 y, approximate quantum methods for small times, t < ~100 y, yield the probability, Pρ(t), of keto-amino → enol-imine as Pρ(t) = ½ (γρ/ħ)² t². This approximation introduces a quantum Darwinian evolution model which (a) simulates incidence of cancer data and (b) implies insight into quantum information origins for evolutionary extinction.
(This article belongs to the Special Issue What Is Information?)
193 KiB  
Article
An Alternative View of Privacy on Facebook
by Christian Fuchs
Information 2011, 2(1), 140-165; https://doi.org/10.3390/info2010140 - 09 Feb 2011
Cited by 61 | Viewed by 22782
Abstract
The predominant analysis of privacy on Facebook focuses on personal information revelation. This paper is critical of this kind of research and introduces an alternative analytical framework for studying privacy on Facebook, social networking sites and web 2.0. This framework connects the phenomenon of online privacy to the political economy of capitalism—a focus that has thus far been rather neglected in the research literature about Internet and web 2.0 privacy. Liberal privacy philosophy tends to ignore the political economy of privacy in capitalism, which can mask socio-economic inequality and protect capital and the rich from public accountability. Facebook is analyzed in this paper with the help of an approach in which privacy for dominant groups, in regard to their ability to keep wealth and power secret from the public, is seen as problematic, whereas privacy at the bottom of the power pyramid, for consumers and ordinary citizens, is seen as a protection from dominant interests. Facebook’s privacy concept is based on an understanding that stresses self-regulation and on an individualistic understanding of privacy. The theoretical analysis of the political economy of privacy on Facebook in this paper is based on the political theories of Karl Marx, Hannah Arendt and Jürgen Habermas. Based on the political economist Dallas Smythe’s concept of audience commodification, the process of prosumer commodification on Facebook is analyzed. The political economy of privacy on Facebook is analyzed with the help of a theory of drives that is grounded in Herbert Marcuse’s interpretation of Sigmund Freud, which allows one to analyze Facebook based on the concept of play labor (= the convergence of play and labor).
(This article belongs to the Special Issue Trust and Privacy in Our Networked World)
1175 KiB  
Article
Biological Information—Definitions from a Biological Perspective
by Jan Charles Biro
Information 2011, 2(1), 117-139; https://doi.org/10.3390/info2010117 - 21 Jan 2011
Cited by 5 | Viewed by 8895
Abstract
The objective of this paper is to analyze the properties of information in general and to define biological information in particular.
162 KiB  
Review
Information as a Manifestation of Development
by James A. Coffman
Information 2011, 2(1), 102-116; https://doi.org/10.3390/info2010102 - 21 Jan 2011
Cited by 3 | Viewed by 9307
Abstract
Information manifests a reduction in uncertainty or indeterminacy. As such it can emerge in two ways: by measurement, which involves the intentional choices of an observer; or more generally, by development, which involves systemically mutual (‘self-organizing’) processes that break symmetry. The developmental emergence of information is most obvious in ontogeny, but pertains as well to the evolution of ecosystems and abiotic dissipative structures. In this review, a seminal, well-characterized ontogenetic paradigm—the sea urchin embryo—is used to show how cybernetic causality engenders the developmental emergence of biological information at multiple hierarchical levels of organization. The relevance of information theory to developmental genomics is also discussed.
(This article belongs to the Special Issue What Is Information?)
368 KiB  
Article
On Quantifying Semantic Information
by Simon D’Alfonso
Information 2011, 2(1), 61-101; https://doi.org/10.3390/info2010061 - 18 Jan 2011
Cited by 24 | Viewed by 8550
Abstract
The purpose of this paper is to look at some existing methods of semantic information quantification and suggest some alternatives. It begins with an outline of Bar-Hillel and Carnap’s theory of semantic information before going on to look at Floridi’s theory of strongly semantic information. The latter then serves to initiate an in-depth investigation into the idea of utilising the notion of truthlikeness to quantify semantic information. Firstly, a couple of approaches to measure truthlikeness are drawn from the literature and explored, with a focus on their applicability to semantic information quantification. Secondly, a similar but new approach to measure truthlikeness/information is presented and some supplementary points are made.
(This article belongs to the Special Issue What Is Information?)
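A toy illustration of quantifying information by truthlikeness, in the spirit of the feature-matching measures the paper surveys (not D’Alfonso’s own proposal): score a statement by the share of atomic propositions it gets right. The function name, the scoring rule, and the weather propositions are illustrative assumptions.

```python
def truthlikeness(statement, truth):
    """Score a statement's closeness to the truth over atomic propositions.

    `statement` and `truth` map proposition names to booleans; the score
    is the share of atomic claims the statement gets right, so the whole
    truth scores 1.0 and its full negation scores 0.0.
    """
    matches = sum(statement[p] == truth[p] for p in truth)
    return matches / len(truth)
```

With the truth `{"hot": True, "rainy": False, "windy": True}`, the statement `{"hot": True, "rainy": False, "windy": False}` scores 2/3: closer to the truth than its negation, though still partly false.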
140 KiB  
Article
Information Pluralism and Some Informative Modes of Ignorance
by Erkki Patokorpi
Information 2011, 2(1), 41-60; https://doi.org/10.3390/info2010041 - 17 Jan 2011
Cited by 1 | Viewed by 7472
Abstract
In this paper, information concepts are roughly divided into two categories: the cybernetic and the semiotic-pragmatic. They are further divided into three and four subcategories, respectively. The cybernetic conception of information, which comprises both the mathematical-statistic and the logical-semantic approaches, misses some aspects of information and knowing that are important in economics and technology studies, among others. The semiotic-pragmatic approach presumes the existence of several modes of being of information, and connects certainty and ambiguity to information in a different way than the cybernetic approach does. These two general approaches to information and knowing are strikingly different, especially in their analysis of ignorance or incomplete knowledge. None of the cybernetic conceptions, and only some conceptions within the semiotic-pragmatic approach, can vindicate the elusive intuition of a potential positive role for ignorance. This comparative, philosophical discussion of the modes of ignorance may be taken as a challenge for cybernetics and computational philosophy to make better sense of incomplete knowledge.
453 KiB  
Article
Empirical Information Metrics for Prediction Power and Experiment Planning
by Christopher Lee
Information 2011, 2(1), 17-40; https://doi.org/10.3390/info2010017 - 11 Jan 2011
Cited by 1 | Viewed by 8490
Abstract
In principle, information theory could provide useful metrics for statistical inference. In practice this is impeded by divergent assumptions: Information theory assumes the joint distribution of variables of interest is known, whereas in statistical inference it is hidden and is the goal of inference. To integrate these approaches we note a common theme they share, namely the measurement of prediction power. We generalize this concept as an information metric, subject to several requirements: Calculation of the metric must be objective or model-free; unbiased; convergent; probabilistically bounded; and low in computational complexity. Unfortunately, widely used model selection metrics such as Maximum Likelihood, the Akaike Information Criterion and Bayesian Information Criterion do not necessarily meet all these requirements. We define four distinct empirical information metrics measured via sampling, with explicit Law of Large Numbers convergence guarantees, which meet these requirements: Ie, the empirical information, a measure of average prediction power; Ib, the overfitting bias information, which measures selection bias in the modeling procedure; Ip, the potential information, which measures the total remaining information in the observations not yet discovered by the model; and Im, the model information, which measures the model’s extrapolation prediction power. Finally, we show that Ip + Ie, Ip + Im, and Ie − Im are fixed constants for a given observed dataset (i.e., prediction target), independent of the model, and thus represent a fundamental subdivision of the total information contained in the observations. We discuss the application of these metrics to modeling and experiment planning.
(This article belongs to the Special Issue What Is Information?)
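The idea of an empirical, sampling-based prediction-power metric can be sketched as follows. This is not Lee’s exact definition of Ie, just a minimal illustration of measuring a model’s average log-likelihood gain over a baseline from observed samples, where the sample mean converges by the Law of Large Numbers; the function and parameter names are assumptions.

```python
import math

def empirical_information(samples, model_prob, baseline_prob):
    """Sampling-based estimate of a model's prediction power, in bits.

    Averages the log-likelihood gain of `model_prob` over `baseline_prob`
    on the observed samples; by the Law of Large Numbers the sample mean
    converges to the expected per-observation information gain.
    """
    gains = [math.log2(model_prob(x) / baseline_prob(x)) for x in samples]
    return sum(gains) / len(gains)
```

For a biased coin observed as three heads and one tail, a model assigning P(H) = 0.75 gains roughly 0.19 bits per observation over a uniform baseline on that sample.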
197 KiB  
Article
Some Forms of Trust
by Willem A. DeVries
Information 2011, 2(1), 1-16; https://doi.org/10.3390/info2010001 - 10 Jan 2011
Cited by 9 | Viewed by 7280
Abstract
Three forms of trust are distinguished: topic-focused trust, general trust, and personal trust. Personal trust is argued to be the most fundamental form of trust, deeply connected with the construction of one’s self. Information technology has posed new problems for us in assessing and developing appropriate forms of the trust that is central to our personhood.
(This article belongs to the Special Issue Trust and Privacy in Our Networked World)