The Semantic Perspective Project is an entirely new approach to information and knowledge representation, in which information is composed through "meaningful perspectives": what a concept means when viewed from the perspective of another concept, hence the project's name. This approach differs radically from the mainstream linked data approach; it was born from the assumption that structure-first approaches are inadequate for representing information, because information is inherently structure-invariant.
In my opinion, the seed of trouble in present semantic web technologies lies in the apparently misdirected notion that semantics is somehow absolute. Semantics is meaning, and meaning is highly relative. We cannot talk about what something "means", only about what something means "to me", "to you", "to us" as a group, to "an alien race from planet X", or to "a computer", for that matter. This confusion is probably not accidental: in our globalized world we seem to have set aside our differences and to concentrate only on what is common. That neglect is powerful enough to conceal the fact that subjectivity goes beyond minor individual differences, which fade away in a large enough population; it can exist at the group level as well, creating an enormous barrier between the individuals of such disjoint groups. The two groups relevant to the present problem are humans and computers, and to reconcile our differences we first have to understand what things "mean" to each group. This all sounds more epic than necessary, but in the new semantic web we expect computers to give us meaningful responses, with "meaning" here being relative to us, not to them. Given the different nature of the two groups, I believe that is truly epic. Why? Because this is not a simple case of translating from one language to another. All human languages describe the same reality, whereas here we are talking about different realities. Humans are built to work with information; to us, semantics is a sine qua non, always present in our thought process, and we literally cannot get rid of it. Computers, on the other hand (and here I refer to computer software in general), work with data, which is devoid of what meaning is in human terms. Data may have meaning in computer terms, but that is a fundamentally different reality, a fact the industry does not seem to recognize.
Meaning pertains strongly to how something influences one (what one can do with it, how it affects one, etc.), and as such, meaning is strongly rooted in one's reality. Here the problem presented in the previous paragraph materializes, and in my opinion this is where things went wrong in a practical sense. Our reality, the human one, is a pre-existing reality composed of the physical environment that surrounds us, our individual physical self, our mental self, and so on. Every tool we have created to describe and understand this reality (language, mathematical constructs, and so on) is only a transient model of our reality, and this is how it should be, because this way the models can change. We can constantly adapt them to suit whatever our subjective need is at any point in time, without affecting reality.
The computer reality is fundamentally different: it does not exist a priori, for the reality of a piece of computer software is composed of the definitions, structures, and operations the software can perform with whatever surrounds it within the cybernetic world. There is nothing underneath those definitions, so the constructs we feed into the computer are not models of the computer's reality but THE reality itself in which the software operates. Changing the models for a piece of software therefore does not merely change its view of an underlying reality, as in the human case; it changes its reality altogether. To draw an analogy, that would be similar to changing the laws of physics in the human reality. As structure-based constructs, present semantic web definition languages are not immune to this problem either. The industry seems to relentlessly construct new structures, expecting things to eventually come together. In my opinion this is impossible with the present approach because of the incompatible nature of the two realities. What we need is a different model altogether, one that puts semantics at the center and eliminates the rigidity of structure-based systems, which is in fact what this project is about.
The information representation system proposed by this project, dubbed SPInDL (Semantic Perspective Information Definition Language), relies, as the name says, on "semantic perspectives" and takes the opposite approach to current representation systems. For a side-by-side comparison of SPInDL with the mainstream approach please visit (http://www.semanticperspective.org/philosophy/key-differentiators-a-high...), but in a nutshell: besides the terms in the dictionary, SPInDL has only four fundamental elements: two meta types (Concepts and Specifics) and two meta relations (Divergence and Correlation). Concepts represent anything that humans define as a unique, notable thing in their reality: the concept of space, that of rock, a feeling, anything. Specifics are materializations of concepts, such as a specific "rock" or "tree" to which an individual or group of individuals assigns a well-defined reference in their minds. Up to this point, no relation can exist between things in this reality; that role belongs to the meta relations, but in SPInDL relations are part of the reality, not the laws that define the reality (more on this later). Every information chunk is an arbitrary construction of concepts, specifics, divergences, and one correlation that functions like this:
In human reality, when we want to alter the significance of a concept into something for which we don't have a concept, all we can do is employ another concept. This is what I termed "divergence", because it creates a divergent view of a concept or specific through the perspective of another concept or specific. As an example, suppose we say "John is fast". In this information byte, we alter the significance of the specific "John", analyzing him from the particular angle of him exhibiting speed. So "John < being < speed" is a perspective diverged from John, where we first diverge through the concept of the existence of a trait, "being", then further diverge through the existence of a particular trait, "speed". Now that we have the exact perspective we want to analyze John from, we correlate it with another concept, specific, or a perspective of either, and create an information block: "John < being < speed -> Fast". As each information byte can contain, besides the one correlation, any number of perspectives, there is really no limit to the number of ways concepts can be diverged and correlated. There are inconveniences, like the fact that a piece of information can be represented in more than one form, but I believe that is a natural property of a good KR language, which allows for infinite variety without having to change the basis. Take language, for instance: it is vast, convoluted, and allows for paradoxes, yet it works. It works because patterns and common agreement emerge through regular use, not because it was constructed from a predefined set of concepts and relations.
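As a rough illustration of the walkthrough above, here is a minimal sketch in Python that builds the "John < being < speed -> Fast" information block out of the four fundamental elements. The class and method names (Concept, Specific, Perspective, Correlation, diverge) are my own assumptions for illustration only, not SPInDL's actual syntax or API.

```python
from dataclasses import dataclass

# Toy modelling of SPInDL's four fundamental elements. All names here
# are illustrative assumptions, not the project's actual definitions.

@dataclass(frozen=True)
class Concept:
    name: str

@dataclass(frozen=True)
class Specific:
    name: str

@dataclass(frozen=True)
class Perspective:
    # A root (concept or specific) diverged through a chain of concepts.
    root: object
    chain: tuple = ()

    def diverge(self, concept):
        # View the current perspective through a further concept.
        return Perspective(self.root, self.chain + (concept,))

    def __str__(self):
        return " < ".join([self.root.name] + [c.name for c in self.chain])

@dataclass(frozen=True)
class Correlation:
    # The single correlation that turns a perspective into an information block.
    left: Perspective
    right: object

    def __str__(self):
        return f"{self.left} -> {self.right.name}"

# "John is fast": diverge John through "being", then "speed",
# then correlate the resulting perspective with "Fast".
john = Specific("John")
being, speed, fast = Concept("being"), Concept("speed"), Concept("Fast")

block = Correlation(Perspective(john).diverge(being).diverge(speed), fast)
print(block)  # John < being < speed -> Fast
```

The divergence chain is deliberately just a tuple: any number of further divergences can be appended, mirroring the claim that an information byte may contain any number of perspectives but exactly one correlation.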
A powerful characteristic of this representation system emerges when we start replacing parts of the information block with wild cards: (? < being < speed -> fast, or ? < being < speed -> ?). Entire families of things can be linked with other families of things through a specific chain of perspectives, giving birth to the concept of "Relation"; or we can see families of perspectives that can be applied to a certain concept, giving birth to the concept of "Type". The manuscript describes further variations of how these wild cards can be applied and what the outcomes are, but one can already see that "Types" and "Relations" become analysis tools applied to the knowledge, not part of the knowledge representation system. Adding or removing any such tool does not affect the knowledge base. In fact, a knowledge base can be analyzed and re-analyzed by generations to come with ever more complex patterns, and still the knowledge would not be affected. I believe this is how a knowledge representation needs to be: simple, yet allowing for infinite complexity.
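The wildcard idea can be sketched with a self-contained toy query, where an information block is reduced to a plain (root, divergence chain, correlate) triple of strings and "?" is the wildcard. The data and the `matches` helper are illustrative assumptions, not SPInDL's actual query mechanism.

```python
# Toy wildcard queries over information blocks. A block is modelled as a
# (root, chain-of-divergences, correlate) triple; "?" matches anything.
# All names and sample data are illustrative assumptions only.

WILD = "?"

knowledge = [
    ("John",    ("being", "speed"),  "fast"),
    ("Cheetah", ("being", "speed"),  "fast"),
    ("John",    ("being", "height"), "tall"),
]

def matches(pattern, block):
    """True if every non-wildcard part of the pattern equals the block's part."""
    return all(p == WILD or p == b for p, b in zip(pattern, block))

# "? < being < speed -> fast": the family of things linked to "fast"
# through one chain of perspectives -- an emergent "Relation".
fast_things = [b[0] for b in knowledge
               if matches((WILD, ("being", "speed"), "fast"), b)]
print(fast_things)  # ['John', 'Cheetah']

# "John < ? -> ?": every perspective applied to John -- an emergent "Type".
john_facts = [b for b in knowledge if matches(("John", WILD, WILD), b)]
print(len(john_facts))  # 2
```

Note that the patterns live entirely outside the `knowledge` list: adding or removing a query pattern never touches the stored blocks, which matches the claim that Types and Relations are analysis tools rather than parts of the knowledge base.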
For a complete presentation of the system, please visit the site and consult the manuscript. Please treat the work not as a finished theory but as the beginning of a new approach to the world of semantics, and consider it from the perspective of what it could become.
Thank you very much!