update documentation

This commit is contained in:
Leon van Kammen 2023-09-04 21:51:01 +02:00
parent 94b89f7a9c
commit 47e9db2b52


@@ -103,18 +103,27 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
How can we add more features to existing text & 3D scenes, without introducing new data formats?<br>
Historically, there have been many attempts to create the ultimate markup language or 3D file format.<br>
However, through the lens of authoring, their lowest common denominator is still: plain text.<br>
XR Fragments allows us to enrich/connect existing data formats, by recursive use of existing technologies:<br>

1. addressability and navigation of 3D scenes/objects: [URI Fragments](https://en.wikipedia.org/wiki/URI_fragment) + src/href spatial metadata
2. hassle-free tagging across text and spatial objects using [BibTeX](https://en.wikipedia.org/wiki/BibTeX) (e.g. [visual-meta](https://visual-meta.info))
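As a purely illustrative, non-normative sketch of these two ingredients, the snippet below shows a made-up scene URL with a URI Fragment, plus a 3D object carrying src/href-style custom properties; all names and values here are invented for the example and not defined by the spec:

```js
// Hypothetical example: a 3D object carries plain key/value metadata
// (custom properties inside e.g. a glTF file), and a URI Fragment addresses it.
const sceneUrl = 'https://example.com/index.gltf#q=cube'   // made-up URL

// invented custom properties of one object inside that scene
const cube = {
  name:  'cube',
  class: 'house',            // class-based selectors like #.house can match this
  href:  'otherscene.gltf'   // navigation: interacting with the object loads other content
}

// the fragment part after '#' addresses/queries object(s) inside the scene
console.log(new URL(sceneUrl).hash)   // '#q=cube'
```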
> NOTE: The chapters in this document are ordered from high-level to low-level (technical) as much as possible.
# Core principle
XR Fragments strives to serve humans first, machines (implementations) later, by ensuring hassle-free text-vs-thought feedback loops.<br>
This also means that the repairability of machine matters should be human-friendly too (not too complex).<br>

> "When a car breaks down, the ones **without** a turbosupercharger are easier to fix"

Let's always focus on average humans: the 'fuzzy symbolical mind' must be served first, before serving the greater ['categorized typesafe RDF hive mind'](https://en.wikipedia.org/wiki/Borg).
# Conventions and Definitions
|definition            | explanation                                                                                            |
|-----------------------|--------------------------------------------------------------------------------------------------------|
|human                  | a sentient being who thinks fuzzily, absorbs, and shares thought (by plain text, not markup language)   |
|scene                  | a (local/remote) 3D scene or 3D file (e.g. index.gltf)                                                   |
|3D object              | an object inside a scene characterized by vertex-, face- and custom-property data                       |
@@ -124,19 +133,12 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
|href                   | (HTML-piggybacked) metadata of a 3D object which links to content                                       |
|query                  | a URI Fragment operator which queries object(s) from a scene (`#q=cube`)                                 |
|visual-meta            | [visual-meta](https://visual-meta.info) data appended to text which is indirectly visible/editable in XR |
|requestless metadata   | opposite of networked metadata (RDF/HTML requests can easily fan out and drop the framerate, hence rarely used in games) |
|FPS                    | frames per second in spatial experiences (e.g. games, VR, AR), should be as high as possible             |
|introspective          | inward sensemaking ("I feel this belongs to that")                                                       |
|extrospective          | outward sensemaking ("I'm fairly sure John is a person who lives in Oklahoma")                           |
|`◻`                    | ASCII representation of a 3D object/mesh                                                                 |
# List of URI Fragments

| fragment | type | example | info |
@@ -216,8 +218,8 @@ Resizing will be happen accordingly to its placeholder object (`aquariumcube`),
# Text in XR (tagging, linking to spatial objects)

We still think and speak in simple text, not in HTML or RDF.<br>
Even the most advanced human will probably not shout `<h1>FIRE!</h1>` in case of an emergency.<br>
Given the new dawn of (non-keyboard) XR interfaces, keeping text as-is (not obscured by markup) is preferred.<br>
Ideally, metadata should arrive **later, with** the text, but should not **obfuscate** the text or live **in another** file.<br>

> Humans first, machines (AI) later.
@@ -226,7 +228,7 @@ This way:
1. XR Fragments allows <b id="tagging-text">hassle-free XR text tagging</b>, using BibTeX metadata **at the end of content** (like [visual-meta](https://visual-meta.info)).
2. XR Fragments allows hassle-free <a href="#textual-tag">textual tagging</a>, <a href="#spatial-tag">spatial tagging</a>, and <a href="#supra-tagging">supra tagging</a>, by mapping 3D/text object (class)names to BibTeX.
3. inline BibTeX is the minimum required **requestless metadata** layer for XR text; RDF/JSON is great, but fits better in the application layer.
4. default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see [the core principle](#core-principle)).
5. anti-pattern: hard-coupling a mandatory **obtrusive markup language** or framework with an XR browser (HTML/VRML/JavaScript) (see [the core principle](#core-principle)).
6. anti-pattern: limiting human introspection by immediately funneling human thought into typesafe, precise, pre-categorized metadata like RDF (see [the core principle](#core-principle)).
@@ -239,7 +241,7 @@ This allows recursive connections between text itself, as well as 3D objects and
|                                                  |
| The houses seen here are built in baroque style. |
|                                                  |
| @house{houses,    <----- XR Fragment triple/tag: phrase-matching BibTeX
|   url = {#.house} <------------------- XR Fragment URI
| }                                                |
+--------------------------------------------------+
@@ -248,12 +250,12 @@ This allows recursive connections between text itself, as well as 3D objects and
This sets up the following associations in the scene:

1. <b id="textual-tagging">textual tag</b>: text or spatial occurrences named 'houses' are now automatically tagged with 'house'
2. <b id="spatial-tagging">spatial tag</b>: spatial object(s) with `"class":"house"` (#.house) are now automatically tagged with 'house'
3. <b id="supra-tagging">supra-tag</b>: text or spatial object(s) named 'house' elsewhere are automatically tagged with 'house'

This allows spatial wires to be rendered; words can be highlighted, spatial objects can be highlighted/moved/scaled, and links can be manipulated by the user.<br>
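Purely as a non-normative illustration of the three associations above (the data structures, object names, and matching rules below are invented for this sketch, not defined by the spec), the tagging could look roughly like this in JavaScript:

```js
// Hypothetical sketch: derive textual/spatial/supra tags from the BibTeX entry
// @house{houses, url = {#.house}}. Names and structures are illustrative only.
const tag = { type: 'house', name: 'houses', url: '#.house' }

const sceneObjects = [
  { name: 'houses', class: '' },        // matches the entry name  -> supra-tag
  { name: 'castle', class: 'house' }    // matches class 'house'   -> spatial tag
]
const textWords = ['The', 'houses', 'seen', 'here']

const tagged = []
// 1. textual tag: words equal to the entry name get tagged
textWords.forEach((w) => { if (w === tag.name) tagged.push('text:' + w) })
// 2. spatial tag: objects whose class matches '#.house' get tagged
sceneObjects.forEach((o) => { if ('#.' + o.class === tag.url) tagged.push('object:' + o.name) })
// 3. supra-tag: objects named 'house'/'houses' elsewhere get tagged
sceneObjects.forEach((o) => { if (o.name === tag.name || o.name === tag.type) tagged.push('object:' + o.name) })

console.log(tagged)   // [ 'text:houses', 'object:castle', 'object:houses' ]
```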
> The simplicity of appending BibTeX (humans first, machines later) is demonstrated by [visual-meta](https://visual-meta.info) in greater detail, and makes it perfect for HUDs/GUIs to generate (bib)text later. Humans can still view/edit the metadata manually, by clicking 'toggle metadata' on the 'back' (e.g. a context menu) of any XR text, anywhere, anytime.
## Default Data URI mimetype
@@ -343,31 +345,28 @@ BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (
1. <b id="frictionless-copy-paste">frictionless copy/pasting</b> (by humans) of (unobtrusive) content AND metadata
1. an introspective 'sketchpad' for metadata, which can (optionally) mature into RDF later
| characteristic                      | UTF-8 Plain Text (with BibTeX) | RDF                 |
|-------------------------------------|--------------------------------|---------------------|
| perspective                         | introspective                  | extrospective       |
| space/scope                         | local                          | world               |
| everything is text (string)         | yes                            | no                  |
| leaves (dictated) text intact       | yes                            | no                  |
| markup language                     | just an appendix               | ~4 different        |
| polyglot format                     | no                             | yes                 |
| easy to copy/paste content+metadata | yes                            | up to application   |
| easy to write/repair for layman     | yes                            | depends             |
| easy to (de)serialize               | yes (fits on A4 paper)         | depends             |
| infrastructure                      | self-contained (plain text)    | (semi)networked     |
| freeform tagging                    | yes, terse                     | yes, verbose        |
| can be appended to text content     | yes                            | up to application   |
| copy-paste text preserves metadata  | yes                            | up to application   |
| emoji                               | yes                            | depends on encoding |
| predicates                          | free                           | semi pre-determined |
| implementation/network overhead     | no                             | depends             |
| used in (physical) books/PDF        | yes (visual-meta)              | no                  |
| terse non-verb predicates           | yes                            | no                  |
| nested structures                   | no                             | yes                 |
## XR text (BibTeX) example parser

Here's a naive XR Text (de)multiplexer in JavaScript (which also supports visual-meta start/end-blocks):
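The reference parser itself lies outside this hunk. Purely as an illustration of the idea (and explicitly not the spec's implementation, which also handles visual-meta start/end-blocks), a naive (de)multiplexer could separate plain text from its appended BibTeX block roughly like this:

```js
// Illustrative sketch only, not the spec's reference parser:
// everything from the first line starting with '@' onwards is treated as the
// (requestless) BibTeX metadata appendix, the rest as plain text.
function xrTextDemux(str) {
  const lines = str.split('\n')
  const i = lines.findIndex((l) => l.trim().startsWith('@'))
  return i === -1
    ? { text: str, bibtex: '' }
    : { text: lines.slice(0, i).join('\n'), bibtex: lines.slice(i).join('\n') }
}

// usage: content followed by its metadata appendix
const doc = 'The houses seen here are built in baroque style.\n\n@house{houses,\n  url = {#.house}\n}'
console.log(xrTextDemux(doc))
```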