From c06328463e99bcd7d4c963318a3587c2704abc8f Mon Sep 17 00:00:00 2001 From: Leon van Kammen Date: Mon, 11 Sep 2023 11:43:02 +0200 Subject: [PATCH] update documentation --- doc/RFC_XR_Fragments.md | 186 ++++----- doc/RFC_XR_Fragments.xml | 869 --------------------------------------- 2 files changed, 91 insertions(+), 964 deletions(-) diff --git a/doc/RFC_XR_Fragments.md b/doc/RFC_XR_Fragments.md index 1fd560c..1902b8a 100644 --- a/doc/RFC_XR_Fragments.md +++ b/doc/RFC_XR_Fragments.md @@ -291,63 +291,103 @@ sub-delims = "," / "=" We still think and speak in simple text, not in HTML or RDF.
The most advanced human will probably not shout `

FIRE!

` in case of emergency.
Given the new dawn of (non-keyboard) XR interfaces, keeping text as is (not obscuring with markup) is preferred.
-Ideally metadata must come **with** text, but not **obfuscate** the text, or **in another** file.
- -This way: - -1. XR Fragments allows hasslefree spatial tagging, by detecting BibTeX metadata **at the end of content** of text (see default mimetype & Data URI) -2. XR Fragments allows hasslefree spatial tagging, by treating 3D object name/class-pairs as BibTeX tags. -3. XR Fragments allows hasslefree textual tagging, spatial tagging, and supra tagging, by mapping 3D/text object (class)names using BibTeX 'tags' -4. BibTex & Hashtagbibs are the first-choice **requestless metadata**-layer for XR text, HTML/RDF/JSON is great (but fits better in the application-layer) -5. Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see [the core principle](#core-principle)). -6. anti-pattern: hardcoupling a mandatory **obtrusive markuplanguage** or framework with an XR browsers (HTML/VRML/Javascript) (see [the core principle](#core-principle)) -7. anti-pattern: limiting human introspection, by immediately funneling human thought into typesafe, precise, pre-categorized metadata like RDF (see [the core principle](#core-principle)) - -This allows recursive connections between text itself, as well as 3D objects and vice versa, using **BibTags** : +Ideally metadata must come **with** text, but should not **obfuscate** the text, nor **spawn another request** to fetch it.
``` - http://y.io/z.fbx | (Evaluated) BibTex/ 'wires' / tags | - ----------------------------------------------------------------------------+------------------------------------- +Spectrum of speak/scan/write/listen/keyboard-friendly 'tagging' notations: + + (just # and @) (string only) (obfuscated text) (type-aware text) + + <---- Bibs ---------- BibTeX ---------- XML / HTML --------- JSON / YAML / RDF --------> + +``` + +Hence: + +1. XR Fragments promotes the importance of hasslefree plain text and string-based patternmatching +2. XR Fragments allows hasslefree spatial tagging, by detecting metadata **at the end of content** of text (see default mimetype & Data URI) +3. XR Fragments allows hasslefree spatial tagging, by treating 3D object name/class-pairs as BibTeX tags. +4. XR Fragments allows hasslefree textual tagging, spatial tagging, and supra tagging, by mapping 3D/text object (class)names using BibTeX 'tags' +5. Appending plain text with **requestless metadata** (microformats) is the first-class citizen for XR text (HTML/RDF/JSON is great, but fits better in the application-layer) +6. string-only, typeless (polyglot) microformats are first-class citizen metadata, since they are easy to edit/add by humans +7. BibTex and [hashtagbibs](https://github.com/coderofsalvation/hashtagbibs) are first-class citizens for adding/describing relationships spatially. +8. Opening tags for metadata (`#`, `@`, `{`, or `<`) should always start at the beginning of the line. +This allows recursive connections between text itself, as well as 3D objects and vice versa.
+ +Here's an example by expanding polyglot metadata to **BibTeX** associations: + +``` + http://y.io/z.fbx | Derived BibTex / 'wires' & tags + ----------------------------------------------------------------------------+-------------------------------------- | @house{castle, +-[src: data:.....]----------------------+ +-[3D mesh]-+ | url = {https://y.io/z.fbx#castle} - | My Notes | | / \ | | } + | Chapter one | | / \ | | } | | | / \ | | @baroque{castle, - | The houses are built in baroque style. | | / \ | | url = {https://y.io/z.fbx#castle} + | John built houses in baroque style. | | / \ | | url = {https://y.io/z.fbx#castle} | | | |_____| | | } - | @house{baroque, | +-----│-----+ | @house{baroque, - | description = {classic} | ├─ name: castle | description = {classic} - | } | └─ class: house baroque | } - +----------------------------------------+ | @house{contactowner, - | } - +-[remotestorage.io / localstorage]------+ | @todo{contactowner, - | #contactowner@todo@house | | } - | ... | | - +----------------------------------------+ | + | #john@baroque | +-----│-----+ | @baroque{john} + | | │ | + | | ├─ name: castle | + | | └─ class: house baroque | + +----------------------------------------+ | + [3D mesh ] | + +-[remotestorage.io / localstorage]------+ | O + name: john | + | #contactjohn@todo@house | | /|\ | | + | ... | | / \ | | + +----------------------------------------+ +--------+ | ``` -BibTex (generated from 3D objects), can be extended by the enduser with personal BiBTex or [hashtagbibs](https://github.com/coderofsalvation/hashtagbibs). +A (rare) example of polyglot tags: -> [hashtagbibs](https://github.com/coderofsalvation/hashtagbibs) allows the enduser to add 'postit' connections (compressed BibTex) by speaking/typing/scanning text, which the XR Browser saves to remotestorage (or localStorage per toplevel URL). As well as, referencing BibTags per URI later on: `https://y.io/z.fbx#@baroque@todo` e.g. 
+``` + http://y.io/z.fbx | Derived BibTex / 'wires' & tags + ----------------------------------------------------------------------------+-------------------------------------- + | @house{castle, + +-[src: data:.....]----------------------+ +-[3D mesh]-+ | url = {https://y.io/z.fbx#castle} + | Chapter one | | / \ | | } + | | | / \ | | @baroque{castle, + | John built houses in baroque style. | | / \ | | url = {https://y.io/z.fbx#castle} + | | | |_____| | | } + | #john@baroque | +-----│-----+ | @baroque{john} + | @house{baroque, info = {classic}, } | │ | @house{baroque, + | { "tag":"john", "match":"john"} | ├─ name: castle | info = {classic} + | | └─ class: house baroque | } + +----------------------------------------+ | @house{contactjohn} + [3D mesh ] | + +-[remotestorage.io / localstorage]------+ | O + name: john | @todo{contactjohn} + | #contactjohn@todo@house | | /|\ | | + | ... | | / \ | | john{john} + +----------------------------------------+ +--------+ | +``` -Obviously, expressing the relationships above in XML/JSON instead of BibTeX, would cause instant cognitive overload.
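The hashtagbib expansion pictured in the diagrams above (e.g. `#john@baroque` becoming `@baroque{john}`) can be sketched in a few lines. A minimal sketch, assuming single-line bibs; the helper name `expandBibs` is ours, not the spec's:

```javascript
// Minimal sketch (illustrative, not normative): expand a hashtagbib like
// '#contactjohn@todo@house' into the BibTeX tags derived in the diagrams above.
function expandBibs(line){
  if( line[0] !== '#' ) return []                 // only lines starting with '#' are bibs
  const [subject, ...tags] = line.slice(1).split('@')
  return tags.map( (t) => `@${t}{${subject}}` )   // '#john@baroque' -> '@baroque{john}'
}

console.log( expandBibs('#contactjohn@todo@house') )
// -> [ '@todo{contactjohn}', '@house{contactjohn}' ]
```

Lines that do not start with `#` pass through untouched, which keeps the dictated/typed text intact.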
-The This allows instant realtime filtering of relationships at various levels: +As seen above, we can extract tags/associations between text & 3D objects, by converting all scene metadata to (in this case) BibTeX, by expanding [hashtagbibs](https://github.com/coderofsalvation/hashtagbibs) and interpreting its polyglot tag-notation.
+One huge advantage of polyglot tags is authoring and copy-paste **by humans**, which will be discussed later in this spec.
-| scope | matching algo | +> [hashtagbibs](https://github.com/coderofsalvation/hashtagbibs) also allows the enduser to annotate text/objects by **speaking/typing/scanning associations**, which the XR Browser saves to remotestorage (or localStorage per toplevel URL). As well as, referencing BibTags per URI later on: `https://y.io/z.fbx#@baroque@todo` e.g. + +The Evaluated BiBTeX allows XR Browsers to show/hide relationships in realtime at various levels: + +| scope | tag-matching algo | |---------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| textual | text containing 'baroque' is now automatically tagged with 'house' (incl. plaintext `src` child nodes) | -| spatial | spatial object(s) with name `baroque` or `"class":"house"` are now automatically tagged with 'house' (incl. child nodes) | -| supra | text- or spatial-object(s) (non-descendant nodes) elsewhere, (class)named 'baroque' or 'house', are automatically tagged with 'house' (current node to root nodes) | -| omni | text- or spatial-object(s) (non-descendant nodes) elsewhere, (class)named 'baroque' or 'house', are automatically tagged with 'house' (too node to all nodes) | -| infinite | text- or spatial-object(s) (non-descendant nodes) elsewhere, (class)named 'baroque' or 'house', are automatically tagged with 'house' (too node to all nodes) | +| textual | text containing 'baroque' is now automatically tagged with 'house' (incl. plaintext `src` child nodes) | +| spatial | spatial object(s) with name `baroque` or `"class":"house"` are now automatically tagged with 'house' (incl. 
child nodes) | +| supra | text- or spatial-object(s) (non-descendant nodes) elsewhere, (class)named 'baroque' or 'house', are automatically tagged with 'house' (current node to root nodes) | +| omni | text- or spatial-object(s) (non-descendant nodes) elsewhere, (class)named 'baroque' or 'house', are automatically tagged with 'house' (root node to all nodes) | +| infinite | text- or spatial-object(s) (non-descendant nodes) elsewhere, (class)named 'baroque' or 'house', are automatically tagged with 'house' (root node to all nodes ) | -BibTex allows the enduser to adjust different levels of associations (see [the core principle](#core-principle)): spatial wires can be rendered, words can be highlighted, spatial objects can be highlighted/moved/scaled, links can be manipulated by the user.
+This allows the enduser to adjust different levels of associations (see [the core principle](#core-principle)): spatial wires can be rendered, words/objects can be highlighted/scaled etc.
-> NOTE: infinite matches both 'baroque' and 'style'-occurences in text, as well as spatial objects with `"class":"style"` or name "baroque". This multiplexing of id/category is deliberate because of [the core principle](#core-principle). +> NOTE: infinite matches both 'baroque' and 'house'-occurences in text, as well as spatial objects with `"class":"house"` or name "baroque". This multiplexing of id/category is deliberate, in order to support [the core principle](#core-principle). -8. The XR Browser needs to adjust tag-scope based on the endusers needs/focus (infinite tagging only makes sense when environment is scaled down significantly) -9. The XR Browser should always allow the human to view/edit the metadata, by clicking 'toggle metadata' on the 'back' (contextmenu e.g.) of any XR text, anywhere anytime. +9. When moving/copying/pasting metadata, always prefer converting to string-only microformats (BibTex/Bibs) +10. respect multi-line metadata because of [the core principle](#core-principle) +11. Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see [the core principle](#core-principle)). +12. anti-pattern: hardcoupling a mandatory **obtrusive markup/scripting-language** or with an XR browser (HTML/VRML/Javascript) (see [the core principle](#core-principle)) +13. anti-pattern: limiting human introspection, by abandoning plain text as first class citizen. +14. The XR Browser needs to adjust tag-scope based on the endusers needs/focus (infinite tagging only makes sense when environment is scaled down significantly) +15. The XR Browser should always allow the human to view/edit the metadata, by clicking 'toggle metadata' on the 'back' (contextmenu e.g.) of any XR text, anywhere anytime. -> The simplicity of appending BibTeX (and leveling the metadata-playfield between humans and machines) is also demonstrated by [visual-meta](https://visual-meta.info) in greater detail. 
+> The simplicity of appending metadata (and leveling the metadata-playfield between humans and machines) is also demonstrated by [visual-meta](https://visual-meta.info) in greater detail. ## Default Data URI mimetype @@ -359,20 +399,21 @@ The XR Fragment specification bumps the traditional default browser-mimetype to a hashtagbib(tex)-friendly one: -`text/plain;charset=utf-8;bib=^@` +`text/plain;charset=utf-8;meta=<#@{` This indicates that: * utf-8 is supported by default -* [hashtagbibs](https://github.com/coderofsalvation/hashtagbibs) are expanded to [bibtags](https://en.wikipedia.org/wiki/BibTeX) -* lines matching regex `^@` will automatically get filtered out, in order to: -* links between textual/spatial objects can automatically be detected -* bibtag appendices ([visual-meta](https://visual-meta.info) can be interpreted e.g. +* lines beginning with `<`, `#`, `@` or `{` (regex: `^(<|#|@|{)`) will not be rendered verbatim by default (=Bibs/BibTex/JSON/XML) + +By doing so, the XR Browser (applications-layer) can interpret microformats ([visual-meta](https://visual-meta.info) +to connect text further with its environment ( setup links between textual/spatial objects automatically e.g.). > for more info on this mimetype see [bibs](https://github.com/coderofsalvation/hashtagbibs) Advantages: +* auto-expanding of [hashtagbibs](https://github.com/coderofsalvation/hashtagbibs) associations * out-of-the-box (de)multiplex human text and metadata in one go (see [the core principle](#core-principle)) * no network-overhead for metadata (see [the core principle](#core-principle)) * ensuring high FPS: HTML/RDF historically is too 'requesty'/'parsy' for game studios @@ -392,9 +433,9 @@ For all other purposes, regular mimetypes can be used (but are not required by t | │ | | | | ├── ◻ article_canvas | | Hello friends. | | │ └ src: ://author.com/article.txt | | | - | │ | | @friend{friends | + | │ | | { | | └── ◻ note_canvas | | ... 
| - | └ src:`data:welcome human\n@...` | | } | + | └ src:`data:welcome human\n{...` | | } | | | +------------------------+ | | +--------------------------------------------------------------+ @@ -407,55 +448,9 @@ The XR Fragment-compatible browser can let the enduser access visual-meta(data)- > additional tagging using [bibs](https://github.com/coderofsalvation/hashtagbibs): to tag spatial object `note_canvas` with 'todo', the enduser can type or speak `@note_canvas@todo` -## Bibs & BibTeX: lowest common denominator for linking data - -> "When a car breaks down, the ones **without** turbosupercharger are easier to fix" - -Unlike XML or JSON, BibTex is typeless, unnested, and uncomplicated, hence a great advantage for introspection.
-It's a missing sensemaking precursor to extrospective RDF.
-BibTeX-appendices are already used in the digital AND physical world (academic books, [visual-meta](https://visual-meta.info)), perhaps due to its terseness & simplicity.
-In that sense, it's one step up from the `.ini` fileformat (which has never leaked into the physical world like BibTex): - -1. frictionless copy/pasting (by humans) of (unobtrusive) content AND metadata -1. an introspective 'sketchpad' for metadata, which can (optionally) mature into RDF later - -| characteristic | UTF8 Plain Text (with BibTeX) | RDF | -|------------------------------------|-------------------------------|---------------------------| -| perspective | introspective | extrospective | -| structure | fuzzy (sensemaking) | precise | -| space/scope | local | world | -| everything is text (string) | yes | no | -| voice/paper-friendly | [bibs](https://github.com/coderofsalvation/hashtagbibs) | no | -| leaves (dictated) text intact | yes | no | -| markup language | just an appendix | ~4 different | -| polyglot format | no | yes | -| easy to copy/paste content+metadata| yes | up to application | -| easy to write/repair for layman | yes | depends | -| easy to (de)serialize | yes (fits on A4 paper) | depends | -| infrastructure | selfcontained (plain text) | (semi)networked | -| freeform tagging/annotation | yes, terse | yes, verbose | -| can be appended to text-content | yes | up to application | -| copy-paste text preserves metadata | yes | up to application | -| emoji | yes | depends on encoding | -| predicates | free | semi pre-determined | -| implementation/network overhead | no | depends | -| used in (physical) books/PDF | yes (visual-meta) | no | -| terse non-verb predicates | yes | no | -| nested structures | no (but: BibTex rulers) | yes | - -> To keep XR Fragments a lightweight spec, BibTeX is used for rudimentary text/spatial tagging (not JSON, RDF or a scripting language because they're harder to write/speak/repair.). - -Of course, on an application-level JSON(LD / RDF) can still be used at will, by embedding RDF-urls/data as custom properties (but is not interpreted by this spec). - ## XR Text example parser - -1. 
The XR Fragments spec does not aim to harden the BiBTeX format -2. respect multi-line BibTex values because of [the core principle](#core-principle) -3. Respect hashtag(bibs) and rulers (like `${visual-meta-start}`) according to the [hashtagbibs spec](https://github.com/coderofsalvation/hashtagbibs) -4. BibTeX snippets should always start in the beginning of a line (regex: ^@), hence mimetype `text/plain;charset=utf-8;bib=^@` - -Here's an XR Text (de)multiplexer in javascript, which ticks all the above boxes: +Here's an example XR Text (de)multiplexer in javascript, which supports inline bibs & bibtex: ``` xrtext = { @@ -488,6 +483,7 @@ xrtext = { if( !(t = t.trim()) ) return if( tag = t.match( pat[1] ) ) tag = tag[0] if( tag.match( /^{.*}$/ ) ) return tags.push({ruler:tag}) + if( tag.match( /}$/ ) ) return tags.push({k: tag.replace(/}$/,''), v: {}}) t = t.substr( tag.length ) t.split( pat[2] ) .map( kv => { diff --git a/doc/RFC_XR_Fragments.xml b/doc/RFC_XR_Fragments.xml index 42f3705..e69de29 100644 --- a/doc/RFC_XR_Fragments.xml +++ b/doc/RFC_XR_Fragments.xml @@ -1,869 +0,0 @@ - - - - - -XR Fragments -
-
-Internet -Internet Engineering Task Force - - -This draft offers a specification for 4D URLs & navigation, to link 3D scenes and text together with- or without a network-connection.
- -The specification promotes spatial addressibility, sharing, navigation, query-ing and tagging interactive (text)objects across for (XR) Browsers.
- -XR Fragments allows us to enrich existing dataformats, by recursive use of existing proven technologies like URI Fragments and BibTags notation.
-
-Almost every idea in this document is demonstrated at https://xrfragment.org -
- -
- - - -
Introduction -How can we add more features to existing text & 3D scenes, without introducing new dataformats?
- -Historically, there's many attempts to create the ultimate markuplanguage or 3D fileformat.
- -Their lowest common denominator is: (co)authoring using plain text.
- -XR Fragments allows us to enrich/connect existing dataformats, by recursive use of existing technologies:
-
- -
    -
  1. addressability and navigation of 3D scenes/objects: URI Fragments + src/href spatial metadata
  2. -
  3. hasslefree tagging across text and spatial objects using bibs / BibTags appendices (see visual-meta e.g.)
  4. -
-
NOTE: The chapters in this document are ordered from highlevel to lowlevel (technical) as much as possible -
- -
Core principle -XR Fragments strives to serve (nontechnical/fuzzy) humans first, and machine(implementations) later, by ensuring hasslefree text-vs-thought feedback loops.
- -This also means that the repair-ability of machine-matters should be human friendly too (not too complex).
-
-
"When a car breaks down, the ones without turbosupercharger are easier to fix" -
Let's always focus on average humans: our fuzzy symbolical mind must be served first, before serving a greater categorized typesafe RDF hive mind. -
Humans first, machines (AI) later. -
Therefore, XR Fragments does not look at XR (or the web) through the lens of HTML.
- -XR Fragments itself is HTML-agnostic, though pseudo-XR Fragment browsers can be implemented on top of HTML/Javascript.
-
- -
Conventions and Definitions -See appendix below in case certain terms are not clear. -
- -
List of URI Fragments - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
fragmenttypeexampleinfo
#posvector3#pos=0.5,0,0positions camera to xyz-coord 0.5,0,0
#rotvector3#rot=0,90,0rotates camera to xyz-rotation 0,90,0
#tvector2#t=500,1000sets animation-loop range between frame 500 and 1000
#......string#.cubes #cubeobject(s) of interest (fragment to object name or class mapping)
xyz coordinates are similar to ones found in SVG Media Fragments -
- -
List of metadata for 3D nodes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
keytypeexample (JSON)info
namestring"name": "cube"available in all 3D fileformats & scenes
classstring"class": "cubes"available through custom property in 3D fileformats
hrefstring"href": "b.gltf"available through custom property in 3D fileformats
srcstring"src": "#q=cube"available through custom property in 3D fileformats
Popular compatible 3D fileformats: .gltf, .obj, .fbx, .usdz, .json (THREE.js), .dae and so on. -
NOTE: XR Fragments are file-agnostic, which means that the metadata exist in programmatic 3D scene(nodes) too. -
- -
Navigating 3D -Here's an ascii representation of a 3D scene-graph which contains 3D objects and their metadata: - - +--------------------------------------------------------+ - | | - | index.gltf | - | │ | - | ├── ◻ buttonA | - | │ └ href: #pos=1,0,1&t=100,200 | - | │ | - | └── ◻ buttonB | - | └ href: other.fbx | <-- file-agnostic (can be .gltf .obj etc) - | | - +--------------------------------------------------------+ - - -An XR Fragment-compatible browser viewing this scene, allows the end-user to interact with the buttonA and buttonB.
- -In case of buttonA the end-user will be teleported to another location and time in the current loaded scene, but buttonB will - replace the current scene with a new one, like other.fbx.
-
- -
Embedding 3D content -Here's an ascii representation of a 3D scene-graph with 3D objects which embeds remote & local 3D objects with/out using queries: - - +--------------------------------------------------------+ +-------------------------+ - | | | | - | index.gltf | | ocean.com/aquarium.fbx | - | │ | | │ | - | ├── ◻ canvas | | └── ◻ fishbowl | - | │ └ src: painting.png | | ├─ ◻ bass | - | │ | | └─ ◻ tuna | - | ├── ◻ aquariumcube | | | - | │ └ src: ://rescue.com/fish.gltf#q=bass%20tuna | +-------------------------+ - | │ | - | ├── ◻ bedroom | - | │ └ src: #q=canvas | - | │ | - | └── ◻ livingroom | - | └ src: #q=canvas | - | | - +--------------------------------------------------------+ - -An XR Fragment-compatible browser viewing this scene, lazy-loads and projects painting.png onto the (plane) object called canvas (which is copy-instanced in the bed and livingroom).
- -Also, after lazy-loading ocean.com/aquarium.gltf, only the queried objects bass and tuna will be instanced inside aquariumcube.
- -Resizing will be happen accordingly to its placeholder object aquariumcube, see chapter Scaling.
-
-
- -
XR Fragment queries -Include, exclude, hide/shows objects using space-separated strings: - -
    -
  • #q=cube
  • -
  • #q=cube -ball_inside_cube
  • -
  • #q=* -sky
  • -
  • #q=-.language .english
  • -
  • #q=cube&rot=0,90,0
  • -
  • #q=price:>2 price:<5
  • -
It's a simple but powerful syntax which allows <b>css</b>-like class/id-selectors with a searchengine prompt-style feeling: -
    -
  1. queries are showing/hiding objects only when defined as src value (prevents sharing of scene-tampered URL's).
  2. -
  3. queries are highlighting objects when defined in the top-Level (browser) URL (bar).
  4. -
  5. search words like cube and foo in #q=cube foo are matched against 3D object names or custom metadata-key(values)
  6. -
  7. search words like cube and foo in #q=cube foo are matched against tags (BibTeX) inside plaintext src values like @cube{redcube, ... e.g.
  8. -
  9. # equals #q=*
  10. -
  11. words starting with . like .german match class-metadata of 3D objects like "class":"german"
  12. -
  13. words starting with . like .german match class-metadata of (BibTeX) tags in XR Text objects like @german{KarlHeinz, ... e.g.
  14. -
-
For example: #q=.foo is a shorthand for #q=class:foo, which will select objects with custom property class:foo. Just a simple #q=cube will simply select an object named cube. -
-
    -
  • see an example video here
  • -
- -
including/excluding - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
operatorinfo
*select all objects (only useful in src custom property)
-removes/hides object(s)
:indicates an object-embedded custom property key/value
.alias for "class" :".foo" equals class:foo
> <compare float or int number
/reference to root-scene.
-Useful in case of (preventing) showing/hiding objects in nested scenes (instanced by src) (*)
* = #q=-/cube hides object cube only in the root-scene (not nested cube objects)
- #q=-cube hides both object cube in the root-scene <b>AND</b> nested cube objects
-
» example implementation -» example 3D asset -» discussion -
- -
Query Parser -Here's how to write a query parser: - -
    -
  1. create an associative array/object to store query-arguments as objects
  2. -
  3. detect object id's & properties foo:1 and foo (reference regex: /^.*:[><=!]?/ )
  4. -
  5. detect excluders like -foo,-foo:1,-.foo,-/foo (reference regex: /^-/ )
  6. -
  7. detect root selectors like /foo (reference regex: /^[-]?\// )
  8. -
  9. detect class selectors like .foo (reference regex: /^[-]?class$/ )
  10. -
  11. detect number values like foo:1 (reference regex: /^[0-9\.]+$/ )
  12. -
  13. expand aliases like .foo into class:foo
  14. -
  15. for every query token split string on :
  16. -
  17. create an empty array rules
  18. -
  19. then strip key-operator: convert "-foo" into "foo"
  20. -
  21. add operator and value to rule-array
  22. -
  23. therefore we set id to true or false (false=excluder -)
  24. -
  25. and we set root to true or false (true=/ root selector is present)
  26. -
  27. we convert key '/foo' into 'foo'
  28. -
  29. finally we add the key/value to the store like store.foo = {id:false,root:true} e.g.
  30. -
-
An example query-parser (which compiles to many languages) can be found here -
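The parser steps above can be sketched compactly. A hedged sketch only (the reference regexes are taken from the list above; the function name `parseQuery` and the store/rule shapes are ours):

```javascript
// Rough sketch of the query-parser steps above; regexes follow the
// reference regexes, while names and shapes are illustrative only.
function parseQuery(query){
  const store = {}
  query.split(' ').forEach( (token) => {
    if( !token ) return
    const rule = {}
    rule.id   = !token.match(/^-/)          // excluder '-' => id:false
    rule.root = !!token.match(/^[-]?\//)    // root selector '/' => root:true
    token = token.replace(/^-/, '').replace(/^\//, '')
    if( token.match(/^\./) ) token = token.replace(/^\./, 'class:')  // expand '.foo' -> 'class:foo'
    const [key, value] = token.split(':')
    if( value !== undefined )
      rule.value = value.match(/^[0-9\.]+$/) ? parseFloat(value) : value
    store[key] = rule
  })
  return store
}

console.log( parseQuery('price:>2 -/cube .english') )
// e.g. store.cube = { id: false, root: true }
```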
- -
XR Fragment URI Grammar - -reserved = gen-delims / sub-delims -gen-delims = "#" / "&" -sub-delims = "," / "=" - -
Example: ://foo.com/my3d.gltf#pos=1,0,0&prio=-5&t=0,100 -
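The grammar above ('#'/'&' as gen-delims, ','/'=' as sub-delims) can be applied to the example URI with a few splits. A minimal sketch; the helper name `parseFragment` and its output shape are illustrative assumptions:

```javascript
// Sketch of the grammar above: split the fragment on the gen-delims
// '#' and '&', then on the sub-delims '=' and ','. Illustrative only.
function parseFragment(uri){
  const frag = uri.split('#')[1] || ''
  const args = {}
  frag.split('&').forEach( (pair) => {
    if( !pair ) return
    const [key, value] = pair.split('=')
    args[key] = value && value.includes(',') ? value.split(',').map(Number) : value
  })
  return args
}

console.log( parseFragment('://foo.com/my3d.gltf#pos=1,0,0&prio=-5&t=0,100') )
```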
- - - - - - - - - - - - - - - - - - -
DemoExplanation
pos=1,2,3vector/coordinate argument e.g.
pos=1,2,3&rot=0,90,0&q=.foocombinators
-
- -
Text in XR (tagging,linking to spatial objects) -We still think and speak in simple text, not in HTML or RDF.
- -The most advanced human will probably not shout <h1>FIRE!</h1> in case of emergency.
- -Given the new dawn of (non-keyboard) XR interfaces, keeping text as is (not obscuring with markup) is preferred.
- -Ideally metadata must come with text, but not obfuscate the text, or in another file.
-
-This way: - -
    -
  1. XR Fragments allows <b id="tagging-text">hasslefree spatial tagging</b>, by detecting BibTeX metadata at the end of content of text (see default mimetype & Data URI)
  2. -
  3. XR Fragments allows <b id="tagging-objects">hasslefree spatial tagging</b>, by treating 3D object name/class-pairs as BibTeX tags.
  4. -
  5. XR Fragments allows hasslefree <a href="#textual-tag">textual tagging</a>, <a href="#spatial-tag">spatial tagging</a>, and <a href="#supra-tagging">supra tagging</a>, by mapping 3D/text object (class)names using BibTeX 'tags'
  6. -
  7. BibTex & Hashtagbibs are the first-choice requestless metadata-layer for XR text, HTML/RDF/JSON is great (but fits better in the application-layer)
  8. -
  9. Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see the core principle).
  10. -
  11. anti-pattern: hardcoupling a mandatory obtrusive markuplanguage or framework with an XR browsers (HTML/VRML/Javascript) (see the core principle)
  12. -
  13. anti-pattern: limiting human introspection, by immediately funneling human thought into typesafe, precise, pre-categorized metadata like RDF (see the core principle)
  14. -
-This allows recursive connections between text itself, as well as 3D objects and vice versa, using BibTags : - - http://y.io/z.fbx | (Evaluated) BibTex/ 'wires' / tags | - ----------------------------------------------------------------------------+------------------------------------- - | @house{castle, - +-[src: data:.....]----------------------+ +-[3D mesh]-+ | url = {https://y.io/z.fbx#castle} - | My Notes | | / \ | | } - | | | / \ | | @baroque{castle, - | The houses are built in baroque style. | | / \ | | url = {https://y.io/z.fbx#castle} - | | | |_____| | | } - | @house{baroque, | +-----│-----+ | @house{baroque, - | description = {classic} | ├─ name: castle | description = {classic} - | } | └─ class: house baroque | } - +----------------------------------------+ | @house{contactowner, - | } - +-[remotestorage.io / localstorage]------+ | @todo{contactowner, - | #contactowner@todo@house | | } - | ... | | - +----------------------------------------+ | - -BibTex (generated from 3D objects), can be extended by the enduser with personal BiBTex or hashtagbibs. -
hashtagbibs allows the enduser to add 'postit' connections (compressed BibTex) by speaking/typing/scanning text, which the XR Browser saves to remotestorage (or localStorage per toplevel URL). As well as, referencing BibTags per URI later on: https://y.io/z.fbx#@baroque@todo e.g. -
Obviously, expressing the relationships above in XML/JSON instead of BibTeX, would cause instant cognitive overload.
This allows instant realtime filtering of relationships at various levels:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
scopematching algo
<b id="textual-tagging">textual</b>text containing 'baroque' is now automatically tagged with 'house' (incl. plaintext src child nodes)
<b id="spatial-tagging">spatial</b>spatial object(s) with name baroque or "class":"house" are now automatically tagged with 'house' (incl. child nodes)
<b id="supra-tagging">supra</b>text- or spatial-object(s) (non-descendant nodes) elsewhere, (class)named 'baroque' or 'house', are automatically tagged with 'house' (current node to root nodes)
<b id="omni-tagging">omni</b>text- or spatial-object(s) (non-descendant nodes) elsewhere, (class)named 'baroque' or 'house', are automatically tagged with 'house' (root node to all nodes)
<b id="infinite-tagging">infinite</b>text- or spatial-object(s) (non-descendant nodes) elsewhere, (class)named 'baroque' or 'house', are automatically tagged with 'house' (root node to all nodes)
BibTex allows the enduser to adjust different levels of associations (see the core principle): spatial wires can be rendered, words can be highlighted, spatial objects can be highlighted/moved/scaled, links can be manipulated by the user.
-
-
NOTE: infinite matches both 'baroque' and 'house'-occurrences in text, as well as spatial objects with "class":"house" or name "baroque". This multiplexing of id/category is deliberate because of the core principle. -
-
    -
  1. The XR Browser needs to adjust tag-scope based on the endusers needs/focus (infinite tagging only makes sense when environment is scaled down significantly)
  2. -
  3. The XR Browser should always allow the human to view/edit the metadata, by clicking 'toggle metadata' on the 'back' (contextmenu e.g.) of any XR text, anywhere anytime.
  4. -
-
The simplicity of appending BibTeX (and leveling the metadata-playfield between humans and machines) is also demonstrated by visual-meta in greater detail. -
-
Default Data URI mimetype -The src-values work as expected (respecting mime-types), however: -The XR Fragment specification bumps the traditional default browser-mimetype -text/plain;charset=US-ASCII -to a hashtagbib(tex)-friendly one: -text/plain;charset=utf-8;bib=^@ -This indicates that: - -
  • utf-8 is supported by default
  • hashtagbibs are expanded to bibtags
  • lines matching regex ^@ will automatically get filtered out, so that:
      • links between textual/spatial objects can automatically be detected
      • bibtag appendices (visual-meta e.g.) can be interpreted
For more info on this mimetype, see bibs.
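As a minimal sketch (the helper name is hypothetical, not part of the spec), a reader of this mimetype could split a payload into human text and the bibtag appendix using the bib=^@ parameter:

```javascript
// Hypothetical helper: split a text/plain;charset=utf-8;bib=^@ payload into
// human-visible text and the (hidden-by-default) bibtag appendix.
function demultiplex(payload, bibPattern = /^@/) {
  const text = [], bibtags = []
  for (const line of payload.split('\n')) {
    // once a line matches the bib-regex, the appendix has started:
    // it (and everything after it) is filtered out of the rendered text
    if (bibPattern.test(line) || bibtags.length) bibtags.push(line)
    else text.push(line)
  }
  return { text: text.join('\n'), bibtags: bibtags.join('\n') }
}

const parts = demultiplex('hello world\n@greeting{hello,\n}')
// parts.text    → 'hello world'
// parts.bibtags → '@greeting{hello,\n}'
```

Only the text-part is rendered spatially; the bibtags stay available for tagging and copy-paste.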
Advantages:

  • out-of-the-box (de)multiplexing of human text and metadata in one go (see the core principle)
  • no network-overhead for metadata (see the core principle)
  • ensures high FPS: HTML/RDF historically is too 'requesty'/'parsy' for game studios
  • rich send/receive/copy-paste everywhere by default, metadata being retained (see the core principle)
  • netto result: fewer webservices, therefore fewer servers, and overall better FPS in XR
This significantly expands the expressiveness and portability of human-tagged text, by postponing machine-concerns to the end of the human text, in contrast to literal interweaving of content and markup symbols (or extra network requests, webservices e.g.).
For all other purposes, regular mimetypes can be used (but are not required by the spec).
URL and Data URI

  +--------------------------------------------------------------+  +------------------------+
  |                                                              |  | author.com/article.txt |
  |  index.gltf                                                  |  +------------------------+
  |    │                                                         |  |                        |
  |    ├── ◻ article_canvas                                      |  |  Hello friends.        |
  |    │     └ src: ://author.com/article.txt                    |  |                        |
  |    │                                                         |  |  @friend{friends       |
  |    └── ◻ note_canvas                                         |  |    ...                 |
  |          └ src:`data:welcome human\n@...`                    |  |  }                     |
  |                                                              |  +------------------------+
  |                                                              |
  +--------------------------------------------------------------+

The enduser will only see welcome human and Hello friends rendered spatially (see mimetype).
The beauty is that text in Data URI automatically promotes rich copy-paste (retaining metadata).
In both cases, the text gets rendered immediately (onto a plane geometry, hence the name '_canvas').
The XR Fragment-compatible browser can let the enduser access visual-meta(data)-fields after interacting with the object (contextmenu e.g.).
additional tagging using bibs: to tag spatial object note_canvas with 'todo', the enduser can type or speak @note_canvas@todo -
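A minimal sketch of that expansion (the function name is hypothetical; the full demultiplexer appears later in this section): speaking or typing @note_canvas@todo yields a bibtag that tags note_canvas with todo:

```javascript
// Hypothetical sketch of hashtagbib expansion: '#object@tag' → '@tag{object,\n}'
// (the '@note_canvas@todo' utterance maps onto the same object@tag shape)
function expandBib(hashtagbib) {
  const [object, ...tags] = hashtagbib.replace(/^#/, '').split('@')
  return tags.map((t) => `@${t}{${object},\n}`).join('\n')
}

expandBib('#note_canvas@todo')   // → '@todo{note_canvas,\n}'
```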
Bibs & BibTeX: lowest common denominator for linking data
"When a car breaks down, the ones without turbosupercharger are easier to fix" -
Unlike XML or JSON, BibTeX is typeless, unnested, and uncomplicated, hence a great advantage for introspection.
It's a missing sensemaking precursor to extrospective RDF.
BibTeX-appendices are already used in the digital AND physical world (academic books, visual-meta), perhaps due to their terseness & simplicity.
In that sense, it's one step up from the .ini fileformat (which has never leaked into the physical world like BibTeX):
  1. <b id="frictionless-copy-paste">frictionless copy/pasting</b> (by humans) of (unobtrusive) content AND metadata
  2. an introspective 'sketchpad' for metadata, which can (optionally) mature into RDF later
| characteristic                      | UTF8 Plain Text (with BibTeX) | RDF                 |
|-------------------------------------|-------------------------------|---------------------|
| perspective                         | introspective                 | extrospective       |
| structure                           | fuzzy (sensemaking)           | precise             |
| space/scope                         | local                         | world               |
| everything is text (string)         | yes                           | no                  |
| voice/paper-friendly                | bibs                          | no                  |
| leaves (dictated) text intact       | yes                           | no                  |
| markup language                     | just an appendix              | ~4 different        |
| polyglot format                     | no                            | yes                 |
| easy to copy/paste content+metadata | yes                           | up to application   |
| easy to write/repair for layman     | yes                           | depends             |
| easy to (de)serialize               | yes (fits on A4 paper)        | depends             |
| infrastructure                      | selfcontained (plain text)    | (semi)networked     |
| freeform tagging/annotation         | yes, terse                    | yes, verbose        |
| can be appended to text-content     | yes                           | up to application   |
| copy-paste text preserves metadata  | yes                           | up to application   |
| emoji                               | yes                           | depends on encoding |
| predicates                          | free                          | semi pre-determined |
| implementation/network overhead     | no                            | depends             |
| used in (physical) books/PDF        | yes (visual-meta)             | no                  |
| terse non-verb predicates           | yes                           | no                  |
| nested structures                   | no (but: BibTeX rulers)       | yes                 |
To keep XR Fragments a lightweight spec, BibTeX is used for rudimentary text/spatial tagging (not JSON, RDF or a scripting language, because they're harder to write/speak/repair).
Of course, on an application-level, JSON(-LD) / RDF can still be used at will, by embedding RDF-urls/data as custom properties (but these are not interpreted by this spec).
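For example (a non-normative sketch; the extras field follows glTF's custom-property convention, and the property name and url are made up), an application could stash and read an RDF url on a 3D node without this spec ever touching it:

```javascript
// Sketch: application-level RDF rides along as a custom property on a 3D
// node (glTF stores such data under 'extras'); XR Fragments ignores it.
const node = {
  name: 'article_canvas',
  extras: { rdf: 'https://example.com/graph.ttl' }  // hypothetical property/url
}

// application-level lookup, returns null when no RDF is attached
const rdfUrl = (n) => (n.extras && n.extras.rdf) || null

rdfUrl(node)            // → 'https://example.com/graph.ttl'
rdfUrl({ name: 'x' })   // → null
```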
XR Text example parser

  1. The XR Fragments spec does not aim to harden the BibTeX format
  2. Respect multi-line BibTeX values because of the core principle
  3. Respect hashtag(bibs) and rulers (like ${visual-meta-start}) according to the hashtagbibs spec
  4. BibTeX snippets should always start at the beginning of a line (regex: ^@), hence mimetype text/plain;charset=utf-8;bib=^@
Here's an XR Text (de)multiplexer in javascript, which ticks all the above boxes:

```javascript
xrtext = {

  expandBibs: (text) => {
    let bibs = { regex: /(#[a-zA-Z0-9_+@\-]+(#)?)/g, tags: {}}
    text.replace( bibs.regex, (m,k,v) => {
      let tok   = m.substr(1).split("@")
      let match = tok.shift()
      if( tok.length ) tok.map( (t) => bibs.tags[t] = `@${t}{${match},\n}` )
      else if( match.substr(-1) == '#' )
        bibs.tags[match] = `@{${match.replace(/#/,'')}}`
      else bibs.tags[match] = `@${match}{${match},\n}`
    })
    return text.replace( bibs.regex, '') + Object.values(bibs.tags).join('\n')
  },

  decode: (str) => {
    // bibtex:    ↓@  ↓<tag|tag{phrase,|{ruler}>  ↓property  ↓end
    let pat   = [ /@/, /^\S+[,{}]/,               /},/,      /}/ ]
    let tags  = [], text=''
    let lines = xrtext.expandBibs(str).replace(/\r?\n/g,'\n').split(/\n/)
    for( let i = 0; i < lines.length && !String(lines[i]).match( /^@/ ); i++ )
      text += lines[i]+'\n'

    let bibtex = lines.join('\n').substr( text.length )
    bibtex.split( pat[0] ).map( (t) => {
      try{
        let v = {}
        if( !(t = t.trim()) ) return
        let tag = t.match( pat[1] )
        if( !tag ) return
        tag = tag[0]
        if( tag.match( /^{.*}$/ ) ) return tags.push({ruler:tag})
        t = t.substr( tag.length )
        t.split( pat[2] )
         .map( kv => {
           if( !(kv = kv.trim()) || kv == "}" ) return
           v[ kv.match(/\s?(\S+)\s?=/)[1] ] = kv.substr( kv.indexOf("{")+1 )
         })
        tags.push( { k:tag, v } )
      }catch(e){ console.error(e) }
    })
    return {text, tags}
  },

  encode: (text,tags) => {
    let str = text+"\n"
    for( let i in tags ){
      let item = tags[i]
      if( item.ruler ){
        str += `@${item.ruler}\n`
        continue;
      }
      str += `@${item.k}\n`
      for( let j in item.v ) str += `  ${j} = {${item.v[j]}}\n`
      str += `}\n`
    }
    return str
  }
}
```

The above functions (de)multiplex text/metadata, expand bibs, and (de)serialize BibTeX (and all fits more or less on one A4 paper).
The above can be used as a starting point for LLMs to translate/steelman to a more formal form/language.
```javascript
str = `
hello world
here are some hashtagbibs followed by bibtex:

#world
#hello@greeting
#another-section#

@{some-section}
@flap{
  asdf = {23423}
}`

var {tags,text} = xrtext.decode(str)           // demultiplex text & bibtex
tags.find( (t) => t.k == 'flap{' ).v.asdf = 1  // edit tag
tags.push({ k:'bar{', v:{abc:123} })           // add tag
console.log( xrtext.encode(text,tags) )        // multiplex text & bibtex back together
```

This expands to the following (hidden by default) BibTeX appendix:

```
hello world
here are some hashtagbibs followed by bibtex:

@{some-section}
@flap{
  asdf = {1}
}
@world{world,
}
@greeting{hello,
}
@{another-section}
@bar{
  abc = {123}
}
```
When an XR browser updates the human text, a quick scan for nonmatching tags (@book{nonmatchingbook e.g.) should be performed, and the enduser should be prompted to delete them.
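Such a scan could be sketched as follows (helper name and tag-shapes are illustrative, matching the {k,v} output of the decoder above):

```javascript
// Illustrative sketch: flag bibtags whose citation-phrase no longer occurs
// in the (edited) human text, so the browser can prompt for deletion.
function findOrphanTags(text, tags) {
  return tags.filter((tag) => {
    if (tag.ruler) return false                                  // rulers carry no phrase
    const phrase = (tag.k.split('{')[1] || '').replace(/,$/, '') // 'book{nonmatchingbook,' → 'nonmatchingbook'
    return phrase.length > 0 && !text.includes(phrase)
  })
}

const orphans = findOrphanTags('hello world', [
  { k: 'greeting{hello,', v: {} },
  { k: 'book{nonmatchingbook,', v: {} }   // phrase no longer in the text
])
// orphans → [ { k: 'book{nonmatchingbook,', v: {} } ]
```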
HYPER copy/paste

The previous example offers something exciting compared to simple copy/paste of 3D objects or text.
XR Text, according to the XR Fragment spec, allows HYPER-copy/paste: time, space and text interlinked.
Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share data in these ways:
  1. time/space: 3D object (current animation-loop)
  2. text: TeXt object (including BibTeX/visual-meta if any)
  3. interlinked: Collected objects by visual-meta tag
Security Considerations

Since XR Text contains metadata too, the user should be able to set up tagging-rules, so the copy-paste feature can:
  • filter out sensitive data when copy/pasting (XR text with class:secret e.g.)
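A sketch of such a tagging-rule (the rule format, names and deny-list are hypothetical) applied before copied data leaves the browser:

```javascript
// Hypothetical tagging-rule: strip tags whose class is deny-listed before
// the copied data leaves the XR browser.
function filterSensitive(tags, rules = { deny: ['secret'] }) {
  return tags.filter((t) => !rules.deny.includes((t.v && t.v.class) || ''))
}

const copied = filterSensitive([
  { k: 'note{diary,', v: { class: 'secret' } },  // filtered out
  { k: 'note{todo,',  v: {} }                    // survives
])
// copied → [ { k: 'note{todo,', v: {} } ]
```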
IANA Considerations

This document has no IANA actions.
Acknowledgments

  • NLNET
  • Future of Text
  • visual-meta.info
Appendix: Definitions
| definition           | explanation |
|----------------------|-------------|
| human                | a sentient being who thinks fuzzy, absorbs, and shares thought (by plain text, not markuplanguage) |
| scene                | a (local/remote) 3D scene or 3D file (index.gltf e.g.) |
| 3D object            | an object inside a scene characterized by vertex-, face- and customproperty data |
| metadata             | custom properties of text, 3D Scene or Object(nodes), relevant to machines and a human minority (academics/developers) |
| XR fragment          | URI Fragment with spatial hints like #pos=0,0,0&t=1,100 e.g. |
| src                  | (HTML-piggybacked) metadata of a 3D object which instances content |
| href                 | (HTML-piggybacked) metadata of a 3D object which links to content |
| query                | an URI Fragment-operator which queries object(s) from a scene like #q=cube |
| visual-meta          | visual-meta data appended to text/books/papers which is indirectly visible/editable in XR |
| requestless metadata | metadata which never spawns new requests (unlike RDF/HTML, which can cause framerate-dropping, hence not used a lot in games) |
| FPS                  | frames per second in spatial experiences (games, VR, AR e.g.), should be as high as possible |
| introspective        | inward sensemaking ("I feel this belongs to that") |
| extrospective        | outward sensemaking ("I'm fairly sure John is a person who lives in Oklahoma") |
| ◻                    | ascii representation of a 3D object/mesh |
| (un)obtrusive        | obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine-symbols and words |
| BibTeX               | simple tagging/citing/referencing standard for plaintext |
| BibTag               | a BibTeX tag |
| (hashtag)bibs        | an easy to speak/type/scan tagging SDL (see here) |