From 9b512f12fd5dfc1a39d31870eb490464c70221d6 Mon Sep 17 00:00:00 2001 From: Leon van Kammen Date: Fri, 8 Sep 2023 17:01:14 +0200 Subject: [PATCH] update documentation --- doc/RFC_XR_Fragments.html | 157 +++++---- doc/RFC_XR_Fragments.md | 6 +- doc/RFC_XR_Fragments.md.bak | 614 +++++++++++++++++++++++++++++++++++ doc/RFC_XR_Fragments.txt | 318 ++++++++++-------- doc/RFC_XR_Fragments.xml | 148 +++++---- doc/RFC_XR_Text_Fragments.md | 204 ------------ 6 files changed, 983 insertions(+), 464 deletions(-) create mode 100644 doc/RFC_XR_Fragments.md.bak delete mode 100644 doc/RFC_XR_Text_Fragments.md diff --git a/doc/RFC_XR_Fragments.html b/doc/RFC_XR_Fragments.html index 5626cc4..fa524c3 100644 --- a/doc/RFC_XR_Fragments.html +++ b/doc/RFC_XR_Fragments.html @@ -97,7 +97,7 @@ XR Fragments allows us to enrich/connect existing dataformats, by recursive use
  1. addressibility and navigation of 3D scenes/objects: URI Fragments + src/href spatial metadata
  2. -
  3. hasslefree tagging across text and spatial objects using bibs / BibTags as appendix (see visual-meta e.g.)
  4. +
  5. hasslefree tagging across text and spatial objects using bibs / BibTags appendices (see visual-meta e.g.)
@@ -417,11 +417,7 @@ sub-delims = "," / "="

We still think and speak in simple text, not in HTML or RDF.
The most advanced human will probably not shout <h1>FIRE!</h1> in case of emergency.
Given the new dawn of (non-keyboard) XR interfaces, keeping text as is (not obscuring with markup) is preferred.
-Ideally metadata must come later with text, but not obfuscate the text, or in another file.

- -
-

Humans first, machines (AI) later (core principle)

-
+Ideally metadata must come with text, but not obfuscate the text, or in another file.

This way:

@@ -448,6 +444,10 @@ Ideally metadata must come later with text, but not obf +---------------------------------------------+ +
+

The enduser can add connections by speaking/typing/scanning hashtagbibs which the XR Browser can expand to (hidden) BibTags.
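For illustration, such an expansion could look as follows in javascript (a sketch only: `expandBib` is a hypothetical helper, not part of the spec; the normative behaviour is defined by the hashtagbibs spec and the parser shown later in this document):

```javascript
// Sketch (assumption): expand one hashtagbib into a (hidden) BibTag,
// following the pattern '#phrase@tag' -> '@tag{phrase,}' and '#word' -> '@word{word,}'
const expandBib = (bib) => {
  const [phrase, ...tags] = bib.substr(1).split('@')   // strip '#', split on '@'
  return tags.length
    ? tags.map( (t) => `@${t}{${phrase},\n}` ).join('\n')
    : `@${phrase}{${phrase},\n}`
}
console.log( expandBib('#hello@greeting') )   // @greeting{hello,
                                              // }
```

Speaking or typing `#hello@greeting` therefore costs the enduser a few characters, while the machine-readable BibTag stays hidden in the appendix.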

+
+

This allows instant realtime tagging of objects at various scopes:

@@ -505,21 +505,28 @@ The simplicity of appending BibTeX ‘tags’ (humans first, machines la

text/plain;charset=US-ASCII

-

to a green eco-friendly:

+

to a hashtagbib(tex)-friendly one:

text/plain;charset=utf-8;bib=^@

-

This indicates that bibs and bibtags matching regex ^@ will automatically get filtered out, in order to:

+

This indicates that:

    -
  • automatically detect links between textual/spatial objects
  • -
  • detect opiniated bibtag appendices (visual-meta e.g.)
  • +
  • utf-8 is supported by default
  • +
  • hashtagbibs are expanded to bibtags
  • +
  • lines matching regex ^@ will automatically get filtered out, in order to:
  • +
  • links between textual/spatial objects can automatically be detected
  • +
• bibtag appendices (visual-meta e.g.) can be interpreted
-

Its concept is similar to literate programming, which empowers local/remote responses to:

+
+

for more info on this mimetype see bibs
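As a rough illustration, a text renderer honouring this mimetype could derive its filter directly from the `bib=` parameter (a sketch under the assumption that `bib=` carries a regex, as stated above; a real parser, like the xrtext demultiplexer later in this document, must also respect multi-line BibTag values):

```javascript
// Sketch (assumption): derive the bibtag-filter from the 'bib=' mimetype
// parameter and hide matching lines from the rendered text.
const mime     = 'text/plain;charset=utf-8;bib=^@'
const bibParam = mime.split(';').find( (p) => p.startsWith('bib=') )
const bibRegex = new RegExp( bibParam.slice(4) )               // -> /^@/
const src      = 'hello world\n@{some-section}'
const visible  = src.split('\n').filter( (l) => !bibRegex.test(l) ).join('\n')
console.log( visible )                                        // hello world
```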

+
+ +

Advantages:

    -
  • (de)multiplex human text and metadata in one go (see the core principle)
  • +
  • out-of-the-box (de)multiplex human text and metadata in one go (see the core principle)
  • no network-overhead for metadata (see the core principle)
  • ensuring high FPS: HTML/RDF historically is too ‘requesty’/‘parsy’ for game studios
  • rich send/receive/copy-paste everywhere by default, metadata being retained (see the core principle)
  • @@ -530,12 +537,7 @@ The simplicity of appending BibTeX ‘tags’ (humans first, machines la

    This significantly expands expressiveness and portability of human tagged text, by postponing machine-concerns to the end of the human text in contrast to literal interweaving of content and markupsymbols (or extra network requests, webservices e.g.).

    -

    For all other purposes, regular mimetypes can be used (but are not required by the spec).
    -To keep XR Fragments a lightweight spec, BibTeX is used for text/spatial tagging (not a scripting language or RDF e.g.).

    - -
    -

Applications are also free to attach any JSON(LD / RDF) to spatial objects using custom properties (but these are not interpreted by this spec).

    -
    +

    For all other purposes, regular mimetypes can be used (but are not required by the spec).

    URL and Data URI

    @@ -559,7 +561,7 @@ In both cases, the text gets rendered immediately (onto a plane geometry, hence The XR Fragment-compatible browser can let the enduser access visual-meta(data)-fields after interacting with the object (contextmenu e.g.).

    -

    additional tagging using bibs: to tag spatial object note_canvas with ‘todo’, the enduser can type or speak @note_canvas@todo

    +

    additional tagging using bibs: to tag spatial object note_canvas with ‘todo’, the enduser can type or speak @note_canvas@todo

The mapping between 3D objects and text (src-data) is simple:

    @@ -573,8 +575,8 @@ The XR Fragment-compatible browser can let the enduser access visual-meta(data)- | └── ◻ rentalhouse | | └ class: house <----------------- matches -------+ | └ ◻ note | | - | └ src:`data: todo: call owner | bib | - | @owner@house@todo | ----> expands to @house{owner, + | └ src:`data: todo: call owner | hashtagbib | + | #owner@house@todo | ----> expands to @house{owner, | | bibtex: } | ` | @contact{ +------------------------------------------------+ } @@ -593,7 +595,7 @@ The XR Fragment-compatible browser can let the enduser access visual-meta(data)-

    “When a car breaks down, the ones without turbosupercharger are easier to fix”

    -

    Unlike XML or JSON, the typeless, unnested, everything-is-text nature of BibTeX tags is a great advantage for introspection.
    +

    Unlike XML or JSON, BibTex is typeless, unnested, and uncomplicated, hence a great advantage for introspection.
    It’s a missing sensemaking precursor to extrospective RDF.
    BibTeX-appendices are already used in the digital AND physical world (academic books, visual-meta), perhaps due to its terseness & simplicity.
    In that sense, it’s one step up from the .ini fileformat (which has never leaked into the physical world like BibTex):

    @@ -639,7 +641,7 @@ In that sense, it’s one step up from the .ini fileformat (whi
- + @@ -741,57 +743,70 @@ In that sense, it’s one step up from the .ini fileformat (whi
voice/paper-friendlybibsbibs no
+
+

To keep XR Fragments a lightweight spec, BibTeX is used for rudimentary text/spatial tagging (not JSON, RDF or a scripting language because they’re harder to write/speak/repair).

+
+ +

Applications are also free to attach any JSON(LD / RDF) to spatial objects using custom properties (but these are not interpreted by this spec).
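For instance (a non-normative sketch: glTF stores such custom properties in a node's `extras` object, and the JSON-LD property values here are made up for illustration):

```javascript
// Sketch (assumption): one glTF-style node mixing XR Fragments custom
// properties (class/href/src) with free-form JSON-LD the spec ignores.
const node = {
  name  : 'rentalhouse',
  extras: {
    class     : 'house',                  // interpreted by XR Fragments
    href      : 'other.fbx',              // interpreted by XR Fragments
    '@context': 'https://schema.org',     // JSON-LD: ignored by this spec
    '@type'   : 'House'                   // JSON-LD: ignored by this spec
  }
}
const { class: klass, href, src } = node.extras   // what an XR Fragments parser reads
console.log( klass, href, src )                   // house other.fbx undefined
```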

+

XR Text example parser

1. The XR Fragments spec does not aim to harden the BibTeX format
  2. -
  3. However, respect multi-line BibTex values because of the core principle
  4. -
  5. Expand bibs and rulers (like ${visual-meta-start}) according to the tagbibs spec
  6. +
7. Respect multi-line BibTeX values because of the core principle
  8. +
  9. Expand hashtag(bibs) and rulers (like ${visual-meta-start}) according to the hashtagbibs spec
  10. BibTeX snippets should always start in the beginning of a line (regex: ^@), hence mimetype text/plain;charset=utf-8;bib=^@

Here’s an XR Text (de)multiplexer in javascript, which ticks all the above boxes:

xrtext = {
-    
-  decode: (str) => {
-         // bibtex:     ↓@   ↓<tag|tag{phrase,|{ruler}>  ↓property  ↓end
-         let pat    = [ /@/, /^\S+[,{}]/,                /},/,      /}/ ]
-         let tags   = [], text='', i=0, prop=''
-         var bibs   = { regex: /(@[a-zA-Z0-9_+]+@[a-zA-Z0-9_@]+)/g, tags: {}}
-         let lines  = str.replace(/\r?\n/g,'\n').split(/\n/)
-         for( let i = 0; !lines[i].match( /^@/ ); i++ ) text += lines[i]+'\n'
 
-         bibtex = lines.join('\n').substr( text.length )
-         bibtex.replace( bibs.regex , (m,k,v) => {
-             tok   = m.substr(1).split("@")
-             match = tok.shift()            
-             tok.map( (t) => bibs.tags[match] = `@${t}{${match},\n}\n` )
-         })
-         bibtex = Object.values(bibs.tags).join('\n') + bibtex.replace( bibs.regex, '') 
-         bibtex.split( pat[0] ).map( (t) => {
-             try{
-                let v = {}
-                if( !(t = t.trim())         ) return            
-                if( tag = t.match( pat[1] ) ) tag = tag[0]
-                if( tag.match( /^{.*}$/ )   ) return tags.push({ruler:tag})
-                t = t.substr( tag.length )
-                t.split( pat[2] )
-                .map( kv => {
-                  if( !(kv = kv.trim()) || kv == "}" ) return
-                  v[ kv.match(/\s?(\S+)\s?=/)[1] ] = kv.substr( kv.indexOf("{")+1 )              
-                })
-                tags.push( { k:tag, v } )
-             }catch(e){ console.error(e) }
-        })
-        return {text, tags}      
+  expandBibs: (text) => { 
+    let bibs   = { regex: /(#[a-zA-Z0-9_+@\-]+(#)?)/g, tags: {}}
+    text.replace( bibs.regex , (m,k,v) => {
+       let tok   = m.substr(1).split("@")
+       let match = tok.shift()
+       if( tok.length ) tok.map( (t) => bibs.tags[t] = `@${t}{${match},\n}` )
+       else if( match.substr(-1) == '#' ) 
+          bibs.tags[match] = `@{${match.replace(/#/,'')}}`
+       else bibs.tags[match] = `@${match}{${match},\n}`
+    })
+    return text.replace( bibs.regex, '') + Object.values(bibs.tags).join('\n')
   },
     
+  decode: (str) => {
+    // bibtex:     ↓@   ↓<tag|tag{phrase,|{ruler}>  ↓property  ↓end
+    let pat    = [ /@/, /^\S+[,{}]/,                /},/,      /}/ ]
+    let tags   = [], text='', i=0, prop=''
+    let lines  = xrtext.expandBibs(str).replace(/\r?\n/g,'\n').split(/\n/)
+    for( let i = 0; i < lines.length && !String(lines[i]).match( /^@/ ); i++ ) 
+        text += lines[i]+'\n'
+
+    let bibtex = lines.join('\n').substr( text.length )
+    bibtex.split( pat[0] ).map( (t) => {
+        try{
+           let v = {}
+           if( !(t = t.trim())         ) return
+           if( tag = t.match( pat[1] ) ) tag = tag[0]
+           if( tag.match( /^{.*}$/ )   ) return tags.push({ruler:tag})
+           t = t.substr( tag.length )
+           t.split( pat[2] )
+           .map( kv => {
+             if( !(kv = kv.trim()) || kv == "}" ) return
+             v[ kv.match(/\s?(\S+)\s?=/)[1] ] = kv.substr( kv.indexOf("{")+1 )
+           })
+           tags.push( { k:tag, v } )
+        }catch(e){ console.error(e) }
+    })
+    return {text, tags}
+  },
+
   encode: (text,tags) => {
     let str = text+"\n"
     for( let i in tags ){
       let item = tags[i]
-      if( item.ruler ){ 
+      if( item.ruler ){
           str += `@${item.ruler}\n`
           continue;
       }
@@ -799,7 +814,7 @@ In that sense, it’s one step up from the .ini fileformat (whi
       for( let j in item.v ) str += `  ${j} = {${item.v[j]}}\n`
       str += `}\n`
     }
-    return str 
+    return str
   }
 }
 
@@ -812,8 +827,12 @@ In that sense, it’s one step up from the .ini fileformat (whi
str = `
 hello world
+here are some hashtagbibs followed by bibtex:
+
+#world
+#hello@greeting
+#another-section#
 
-@hello@greeting
 @{some-section}
 @flap{
   asdf = {23423}
@@ -825,22 +844,29 @@ tags.push({ k:'bar{', v:{abc:123} })          // add tag
 console.log( xrtext.encode(text,tags) )       // multiplex text & bibtex back together 
 
-

This outputs:

+

This expands to the following (hidden by default) BibTex appendix:

hello world
+here are some hashtagbibs followed by bibtex:
 
-
-@greeting{hello,
-}
 @{some-section}
 @flap{
   asdf = {1}
 }
+@world{world,
+}
+@greeting{hello,
+}
+@{another-section}
 @bar{
   abc = {123}
 }
 
+
+

When an XR browser updates the human text, it should quickly scan for nonmatching tags (@book{nonmatchingbook e.g.) and prompt the enduser to delete them.
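Such a scan could look like this (a sketch only, not part of the spec: it assumes the `{k,v}` tag objects produced by `xrtext.decode` above, and leaves the prompting UI out):

```javascript
// Sketch (assumption): return tags whose phrase (the part after '{')
// no longer occurs in the updated human text, so the XR browser can
// prompt the enduser before deleting them.
const nonmatching = (text, tags) =>
  tags.filter( (tag) => {
    if( tag.ruler ) return false                    // rulers like @{some-section}: keep
    const phrase = (tag.k.split('{')[1] || '')      // 'book{nonmatchingbook,' -> phrase
      .replace(/,$/,'')
    return !!phrase && !text.includes( phrase )
  })

const tags = [ {k:'greeting{hello,', v:{}}, {k:'book{nonmatchingbook,', v:{}} ]
console.log( nonmatching('hello world', tags).map( (t) => t.k ) )  // [ 'book{nonmatchingbook,' ]
```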

+
+

HYPER copy/paste

The previous example, offers something exciting compared to simple copy/paste of 3D objects or text. @@ -968,6 +994,11 @@ Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share BibTag a BibTeX tag + + +(hashtag)bibs +an easy to speak/type/scan tagging SDL (see here + diff --git a/doc/RFC_XR_Fragments.md b/doc/RFC_XR_Fragments.md index 6f5233e..ddfe627 100644 --- a/doc/RFC_XR_Fragments.md +++ b/doc/RFC_XR_Fragments.md @@ -291,9 +291,7 @@ sub-delims = "," / "=" We still think and speak in simple text, not in HTML or RDF.
The most advanced human will probably not shout `

FIRE!

` in case of emergency.
Given the new dawn of (non-keyboard) XR interfaces, keeping text as is (not obscuring with markup) is preferred.
-Ideally metadata must come **later with** text, but not **obfuscate** the text, or **in another** file.
- -> Humans first, machines (AI) later ([core principle](#core-principle) +Ideally metadata must come **with** text, but not **obfuscate** the text, or **in another** file.
This way: @@ -319,7 +317,7 @@ This allows recursive connections between text itself, as well as 3D objects and +---------------------------------------------+ ``` -> The enduser can add connections by speaking/typing/scanning [hashtagbibs](https://github.com/coderofsalvation/hashtagbibs) which the XR Browser can expand to BibTags. +> The enduser can add connections by speaking/typing/scanning [hashtagbibs](https://github.com/coderofsalvation/hashtagbibs) which the XR Browser can expand to (hidden) BibTags. This allows instant realtime tagging of objects at various scopes: diff --git a/doc/RFC_XR_Fragments.md.bak b/doc/RFC_XR_Fragments.md.bak new file mode 100644 index 0000000..09e996d --- /dev/null +++ b/doc/RFC_XR_Fragments.md.bak @@ -0,0 +1,614 @@ +%%% +Title = "XR Fragments" +area = "Internet" +workgroup = "Internet Engineering Task Force" + +[seriesInfo] +name = "XR-Fragments" +value = "draft-XRFRAGMENTS-leonvankammen-00" +stream = "IETF" +status = "informational" + +date = 2023-04-12T00:00:00Z + +[[author]] +initials="L.R." +surname="van Kammen" +fullname="L.R. van Kammen" + +%%% + + + + + +.# Abstract + +This draft offers a specification for 4D URLs & navigation, to link 3D scenes and text together with- or without a network-connection.
+The specification promotes spatial addressibility, sharing, navigation, query-ing and tagging of interactive (text)objects for (XR) Browsers.
+XR Fragments allows us to enrich existing dataformats, by recursive use of existing proven technologies like [URI Fragments](https://en.wikipedia.org/wiki/URI_fragment) and BibTags notation.
+ +> Almost every idea in this document is demonstrated at [https://xrfragment.org](https://xrfragment.org) + +{mainmatter} + +# Introduction + +How can we add more features to existing text & 3D scenes, without introducing new dataformats?
+Historically, there have been many attempts to create the ultimate markuplanguage or 3D fileformat.
+However, thru the lens of authoring, their lowest common denominator is still: plain text.
+XR Fragments allows us to enrich/connect existing dataformats, by recursive use of existing technologies:
+ +1. addressibility and navigation of 3D scenes/objects: [URI Fragments](https://en.wikipedia.org/wiki/URI_fragment) + src/href spatial metadata +1. hasslefree tagging across text and spatial objects using [BibTags](https://en.wikipedia.org/wiki/BibTeX) as appendix (see [visual-meta](https://visual-meta.info) e.g.) + +> NOTE: The chapters in this document are ordered from highlevel to lowlevel (technical) as much as possible + +# Core principle + +XR Fragments strives to serve (nontechnical/fuzzy) humans first, and machine(implementations) later, by ensuring hasslefree text-vs-thought feedback loops.
+This also means that the repair-ability of machine-matters should be human friendly too (not too complex).
+ +> "When a car breaks down, the ones **without** turbosupercharger are easier to fix" + +Let's always focus on average humans: the 'fuzzy symbolical mind' must be served first, before serving the greater ['categorized typesafe RDF hive mind'](https://en.wikipedia.org/wiki/Borg)). + +> Humans first, machines (AI) later. + +# Conventions and Definitions + +|definition | explanation | +|----------------------|-------------------------------------------------------------------------------------------------------------------------------| +|human | a sentient being who thinks fuzzy, absorbs, and shares thought (by plain text, not markuplanguage) | +|scene | a (local/remote) 3D scene or 3D file (index.gltf e.g.) | +|3D object | an object inside a scene characterized by vertex-, face- and customproperty data. | +|metadata | custom properties of text, 3D Scene or Object(nodes), relevant to machines and a human minority (academics/developers) | +|XR fragment | URI Fragment with spatial hints like `#pos=0,0,0&t=1,100` e.g. | +|src | (HTML-piggybacked) metadata of a 3D object which instances content | +|href | (HTML-piggybacked) metadata of a 3D object which links to content | +|query | an URI Fragment-operator which queries object(s) from a scene like `#q=cube` | +|visual-meta | [visual-meta](https://visual.meta.info) data appended to text/books/papers which is indirectly visible/editable in XR. | +|requestless metadata | opposite of networked metadata (RDF/HTML requests can easily fan out into framerate-dropping, hence not used a lot in games). 
| +|FPS | frames per second in spatial experiences (games,VR,AR e.g.), should be as high as possible | +|introspective | inward sensemaking ("I feel this belongs to that") | +|extrospective | outward sensemaking ("I'm fairly sure John is a person who lives in oklahoma") | +|`◻` | ascii representation of an 3D object/mesh | +|(un)obtrusive | obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine-symbols and words | +|BibTeX | simple tagging/citing/referencing standard for plaintext | +|BibTag | a BibTeX tag | + +# List of URI Fragments + +| fragment | type | example | info | +|--------------|----------|-------------------|-------------------------------------------------------------------| +| `#pos` | vector3 | `#pos=0.5,0,0` | positions camera to xyz-coord 0.5,0,0 | +| `#rot` | vector3 | `#rot=0,90,0` | rotates camera to xyz-coord 0.5,0,0 | +| `#t` | vector2 | `#t=500,1000` | sets animation-loop range between frame 500 and 1000 | +| `#......` | string | `#.cubes` `#cube` | object(s) of interest (fragment to object name or class mapping) | + +> xyz coordinates are similar to ones found in SVG Media Fragments + +# List of metadata for 3D nodes + +| key | type | example (JSON) | info | +|--------------|----------|--------------------|--------------------------------------------------------| +| `name` | string | `"name": "cube"` | available in all 3D fileformats & scenes | +| `class` | string | `"class": "cubes"` | available through custom property in 3D fileformats | +| `href` | string | `"href": "b.gltf"` | available through custom property in 3D fileformats | +| `src` | string | `"src": "#q=cube"` | available through custom property in 3D fileformats | + +Popular compatible 3D fileformats: `.gltf`, `.obj`, `.fbx`, `.usdz`, `.json` (THREEjs), `COLLADA` and so on. + +> NOTE: XR Fragments are file-agnostic, which means that the metadata exist in programmatic 3D scene(nodes) too. 
+ +# Navigating 3D + +Here's an ascii representation of a 3D scene-graph which contains 3D objects `◻` and their metadata: + +``` + +--------------------------------------------------------+ + | | + | index.gltf | + | │ | + | ├── ◻ buttonA | + | │ └ href: #pos=1,0,1&t=100,200 | + | │ | + | └── ◻ buttonB | + | └ href: other.fbx | <-- file-agnostic (can be .gltf .obj etc) + | | + +--------------------------------------------------------+ + +``` + +An XR Fragment-compatible browser viewing this scene, allows the end-user to interact with the `buttonA` and `buttonB`.
+In case of `buttonA` the end-user will be teleported to another location and time in the **current loaded scene**, but `buttonB` will + **replace the current scene** with a new one, like `other.fbx`. + +# Embedding 3D content + +Here's an ascii representation of a 3D scene-graph with 3D objects `◻` which embeds remote & local 3D objects `◻` (without) using queries: + +``` + +--------------------------------------------------------+ +-------------------------+ + | | | | + | index.gltf | | ocean.com/aquarium.fbx | + | │ | | │ | + | ├── ◻ canvas | | └── ◻ fishbowl | + | │ └ src: painting.png | | ├─ ◻ bass | + | │ | | └─ ◻ tuna | + | ├── ◻ aquariumcube | | | + | │ └ src: ://rescue.com/fish.gltf#q=bass%20tuna | +-------------------------+ + | │ | + | ├── ◻ bedroom | + | │ └ src: #q=canvas | + | │ | + | └── ◻ livingroom | + | └ src: #q=canvas | + | | + +--------------------------------------------------------+ +``` + +An XR Fragment-compatible browser viewing this scene, lazy-loads and projects `painting.png` onto the (plane) object called `canvas` (which is copy-instanced in the bed and livingroom).
+Also, after lazy-loading `ocean.com/aquarium.gltf`, only the queried objects `bass` and `tuna` will be instanced inside `aquariumcube`.
+Resizing will be happen accordingly to its placeholder object `aquariumcube`, see chapter Scaling.
+ +# XR Fragment queries + +Include, exclude, hide/shows objects using space-separated strings: + +* `#q=cube` +* `#q=cube -ball_inside_cube` +* `#q=* -sky` +* `#q=-.language .english` +* `#q=cube&rot=0,90,0` +* `#q=price:>2 price:<5` + +It's simple but powerful syntax which allows css-like class/id-selectors with a searchengine prompt-style feeling: + +1. queries are showing/hiding objects **only** when defined as `src` value (prevents sharing of scene-tampered URL's). +1. queries are highlighting objects when defined in the top-Level (browser) URL (bar). +1. search words like `cube` and `foo` in `#q=cube foo` are matched against 3D object names or custom metadata-key(values) +1. search words like `cube` and `foo` in `#q=cube foo` are matched against tags (BibTeX) inside plaintext `src` values like `@cube{redcube, ...` e.g. +1. `#` equals `#q=*` +1. words starting with `.` like `.german` match class-metadata of 3D objects like `"class":"german"` +1. words starting with `.` like `.german` match class-metadata of (BibTeX) tags in XR Text objects like `@german{KarlHeinz, ...` e.g. + +> **For example**: `#q=.foo` is a shorthand for `#q=class:foo`, which will select objects with custom property `class`:`foo`. Just a simple `#q=cube` will simply select an object named `cube`. + +* see [an example video here](https://coderofsalvation.github.io/xrfragment.media/queries.mp4) + +## including/excluding + +| operator | info | +|----------|-------------------------------------------------------------------------------------------------------------------------------| +| `*` | select all objects (only useful in `src` custom property) | +| `-` | removes/hides object(s) | +| `:` | indicates an object-embedded custom property key/value | +| `.` | alias for `"class" :".foo"` equals `class:foo` | +| `>` `<` | compare float or int number | +| `/` | reference to root-scene.
Useful in case of (preventing) showing/hiding objects in nested scenes (instanced by `src`) (*) | + +> \* = `#q=-/cube` hides object `cube` only in the root-scene (not nested `cube` objects)
`#q=-cube` hides both object `cube` in the root-scene AND nested `skybox` objects | + +[» example implementation](https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/q.js) +[» example 3D asset](https://github.com/coderofsalvation/xrfragment/blob/main/example/assets/query.gltf#L192) +[» discussion](https://github.com/coderofsalvation/xrfragment/issues/3) + +## Query Parser + +Here's how to write a query parser: + +1. create an associative array/object to store query-arguments as objects +1. detect object id's & properties `foo:1` and `foo` (reference regex: `/^.*:[><=!]?/` ) +1. detect excluders like `-foo`,`-foo:1`,`-.foo`,`-/foo` (reference regex: `/^-/` ) +1. detect root selectors like `/foo` (reference regex: `/^[-]?\//` ) +1. detect class selectors like `.foo` (reference regex: `/^[-]?class$/` ) +1. detect number values like `foo:1` (reference regex: `/^[0-9\.]+$/` ) +1. expand aliases like `.foo` into `class:foo` +1. for every query token split string on `:` +1. create an empty array `rules` +1. then strip key-operator: convert "-foo" into "foo" +1. add operator and value to rule-array +1. therefore we we set `id` to `true` or `false` (false=excluder `-`) +1. and we set `root` to `true` or `false` (true=`/` root selector is present) +1. we convert key '/foo' into 'foo' +1. finally we add the key/value to the store like `store.foo = {id:false,root:true}` e.g. + +> An example query-parser (which compiles to many languages) can be [found here](https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Query.hx) + +## XR Fragment URI Grammar + +``` +reserved = gen-delims / sub-delims +gen-delims = "#" / "&" +sub-delims = "," / "=" +``` + +> Example: `://foo.com/my3d.gltf#pos=1,0,0&prio=-5&t=0,100` + +| Demo | Explanation | +|-------------------------------|---------------------------------| +| `pos=1,2,3` | vector/coordinate argument e.g. 
| +| `pos=1,2,3&rot=0,90,0&q=.foo` | combinators | + + +# Text in XR (tagging,linking to spatial objects) + +We still think and speak in simple text, not in HTML or RDF.
+The most advanced human will probably not shout `

FIRE!

` in case of emergency.
+Given the new dawn of (non-keyboard) XR interfaces, keeping text as is (not obscuring with markup) is preferred.
+Ideally metadata must come **later with** text, but not **obfuscate** the text, or **in another** file.
+ +> Humans first, machines (AI) later ([core principle](#core-principle) + +This way: + +1. XR Fragments allows hasslefree XR text tagging, using BibTeX metadata **at the end of content** (like [visual-meta](https://visual.meta.info)). +1. XR Fragments allows hasslefree textual tagging, spatial tagging, and supra tagging, by mapping 3D/text object (class)names using BibTeX 'tags' +1. Bibs/BibTeX-appendices is first-choice **requestless metadata**-layer for XR text, HTML/RDF/JSON is great (but fits better in the application-layer) +1. Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see [the core principle](#core-principle)). +1. anti-pattern: hardcoupling a mandatory **obtrusive markuplanguage** or framework with an XR browsers (HTML/VRML/Javascript) (see [the core principle](#core-principle)) +1. anti-pattern: limiting human introspection, by immediately funneling human thought into typesafe, precise, pre-categorized metadata like RDF (see [the core principle](#core-principle)) + +This allows recursive connections between text itself, as well as 3D objects and vice versa, using **BibTags** : + +``` + +---------------------------------------------+ +------------------+ + | My Notes | | / \ | + | | | / \ | + | The houses here are built in baroque style. | | /house\ | + | | | |_____| | + | | +---------|--------+ + | @house{houses, >----'house'--------| class/name match? + | url = {#.house} >----'houses'-------` class/name match? + | } | + +---------------------------------------------+ +``` + +> The enduser can add connections by speaking/typing/scanning [hashtagbibs](https://github.com/coderofsalvation/hashtagbibs) which the XR Browser can expand to BibTags. 
+ +This allows instant realtime tagging of objects at various scopes: + +| scope | matching algo | +|---------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| textual | text containing 'houses' is now automatically tagged with 'house' (incl. plaintext `src` child nodes) | +| spatial | spatial object(s) with `"class":"house"` (because of `{#.house}`) are now automatically tagged with 'house' (incl. child nodes) | +| supra | text- or spatial-object(s) (non-descendant nodes) elsewhere, named 'house', are automatically tagged with 'house' (current node to root node) | +| omni | text- or spatial-object(s) (non-descendant nodes) elsewhere, containing class/name 'house', are automatically tagged with 'house' (too node to all nodes) | +| infinite | text- or spatial-object(s) (non-descendant nodes) elsewhere, containing class/name 'house' or 'houses', are automatically tagged with 'house' (too node to all nodes) | + +This empowers the enduser spatial expressiveness (see [the core principle](#core-principle)): spatial wires can be rendered, words can be highlighted, spatial objects can be highlighted/moved/scaled, links can be manipulated by the user.
+The simplicity of appending BibTeX 'tags' (humans first, machines later) is also demonstrated by [visual-meta](https://visual-meta.info) in greater detail. + +1. The XR Browser needs to adjust tag-scope based on the endusers needs/focus (infinite tagging only makes sense when environment is scaled down significantly) +1. The XR Browser should always allow the human to view/edit the metadata, by clicking 'toggle metadata' on the 'back' (contextmenu e.g.) of any XR text, anywhere anytime. + +> NOTE: infinite matches both 'house' and 'houses' in text, as well as spatial objects with `"class":"house"` or name "house". This multiplexing of id/category is deliberate because of [the core principle](#core-principle). + +## Default Data URI mimetype + +The `src`-values work as expected (respecting mime-types), however: + +The XR Fragment specification bumps the traditional default browser-mimetype + +`text/plain;charset=US-ASCII` + +to a hashtagbib(tex)-friendly one: + +`text/plain;charset=utf-8;bib=^@` + +This indicates that: + +* utf-8 is supported by default +* [hashtagbibs](https://github.com/coderofsalvation/hashtagbibs) are expanded to [bibtags](https://en.wikipedia.org/wiki/BibTeX) +* lines matching regex `^@` will automatically get filtered out, in order to: +* links between textual/spatial objects can automatically be detected +* bibtag appendices ([visual-meta](https://visual-meta.info) can be interpreted e.g. 
+ +> for more info on this mimetype see [bibs](https://github.com/coderofsalvation/hashtagbibs) + +Advantages: + +* out-of-the-box (de)multiplex human text and metadata in one go (see [the core principle](#core-principle)) +* no network-overhead for metadata (see [the core principle](#core-principle)) +* ensuring high FPS: HTML/RDF historically is too 'requesty'/'parsy' for game studios +* rich send/receive/copy-paste everywhere by default, metadata being retained (see [the core principle](#core-principle)) +* netto result: less webservices, therefore less servers, and overall better FPS in XR + +> This significantly expands expressiveness and portability of human tagged text, by **postponing machine-concerns to the end of the human text** in contrast to literal interweaving of content and markupsymbols (or extra network requests, webservices e.g.). + +For all other purposes, regular mimetypes can be used (but are not required by the spec).
+ +## URL and Data URI + +``` + +--------------------------------------------------------------+ +------------------------+ + | | | author.com/article.txt | + | index.gltf | +------------------------+ + | │ | | | + | ├── ◻ article_canvas | | Hello friends. | + | │ └ src: ://author.com/article.txt | | | + | │ | | @friend{friends | + | └── ◻ note_canvas | | ... | + | └ src:`data:welcome human\n@...` | | } | + | | +------------------------+ + | | + +--------------------------------------------------------------+ +``` + +The enduser will only see `welcome human` and `Hello friends` rendered spatially. +The beauty is that text (AND visual-meta) in Data URI promotes rich copy-paste. +In both cases, the text gets rendered immediately (onto a plane geometry, hence the name '_canvas'). +The XR Fragment-compatible browser can let the enduser access visual-meta(data)-fields after interacting with the object (contextmenu e.g.). + +> additional tagging using [bibs](https://github.com/coderofsalvation/hashtagbibs): to tag spatial object `note_canvas` with 'todo', the enduser can type or speak `@note_canvas@todo` + +The mapping between 3D objects and text (src-data) is simple (the : + +Example: + +``` + +------------------------------------------------+ + | | + | index.gltf | + | │ | + | └── ◻ rentalhouse | + | └ class: house <----------------- matches -------+ + | └ ◻ note | | + | └ src:`data: todo: call owner | hashtagbib | + | #owner@house@todo | ----> expands to @house{owner, + | | bibtex: } + | ` | @contact{ + +------------------------------------------------+ } +``` + +Bi-directional mapping between 3D object names and/or classnames and text using bibs,BibTags & XR Fragments, allows for rich interlinking between text and 3D objects: + +1. When the user surfs to https://.../index.gltf#rentalhouse the XR Fragments-parser points the enduser to the rentalhouse object, and can show contextual info about it. +2. 
When (partial) remote content is embedded through XR Fragment queries (see XR Fragment queries), indirectly related metadata can be embedded along with it.
+
+## Bibs & BibTeX: lowest common denominator for linking data
+
+> "When a car breaks down, the ones **without** turbosupercharger are easier to fix"
+
+Unlike XML or JSON, BibTeX is typeless, unnested, and uncomplicated, hence a great advantage for introspection.
+It's a missing sensemaking precursor to extrospective RDF.
+BibTeX-appendices are already used in the digital AND physical world (academic books, [visual-meta](https://visual-meta.info)), perhaps due to its terseness & simplicity.
+
In that sense, it's one step up from the `.ini` fileformat (which has never leaked into the physical world like BibTeX):
+
+1. frictionless copy/pasting (by humans) of (unobtrusive) content AND metadata
+1. an introspective 'sketchpad' for metadata, which can (optionally) mature into RDF later
+
+| characteristic | UTF8 Plain Text (with BibTeX) | RDF |
+|------------------------------------|-------------------------------|---------------------------|
+| perspective | introspective | extrospective |
+| structure | fuzzy (sensemaking) | precise |
+| space/scope | local | world |
+| everything is text (string) | yes | no |
+| voice/paper-friendly | [bibs](https://github.com/coderofsalvation/hashtagbibs) | no |
+| leaves (dictated) text intact | yes | no |
+| markup language | just an appendix | ~4 different |
+| polyglot format | no | yes |
+| easy to copy/paste content+metadata| yes | up to application |
+| easy to write/repair for layman | yes | depends |
+| easy to (de)serialize | yes (fits on A4 paper) | depends |
+| infrastructure | selfcontained (plain text) | (semi)networked |
+| freeform tagging/annotation | yes, terse | yes, verbose |
+| can be appended to text-content | yes | up to application |
+| copy-paste text preserves metadata | yes | up to application |
+| emoji | yes | depends on encoding |
+| predicates | free | semi pre-determined |
+| implementation/network overhead | no | depends |
+| used in (physical) books/PDF | yes (visual-meta) | no |
+| terse non-verb predicates | yes | no |
+| nested structures | no (but: BibTeX rulers) | yes |
+
+> To keep XR Fragments a lightweight spec, BibTeX is used for rudimentary text/spatial tagging (not JSON, RDF or a scripting language, because they're harder to write/speak/repair).
+
+Applications are also free to attach any JSON(LD / RDF) to spatial objects using custom properties (but this is not interpreted by this spec).
+
+## XR Text example parser
+
+
+1. The XR Fragments spec does not aim to harden the BibTeX format
+2. Respect multi-line BibTeX values because of [the core principle](#core-principle)
+3. Expand hashtag(bibs) and rulers (like `${visual-meta-start}`) according to the [hashtagbibs spec](https://github.com/coderofsalvation/hashtagbibs)
+4. BibTeX snippets should always start at the beginning of a line (regex: ^@), hence mimetype `text/plain;charset=utf-8;bib=^@`
+
+Here's an XR Text (de)multiplexer in javascript, which ticks all the above boxes:
+
+```
+xrtext = {
+
+  expandBibs: (text) => {
+    let bibs = { regex: /(#[a-zA-Z0-9_+@\-]+(#)?)/g, tags: {}}
+    text.replace( bibs.regex , (m,k,v) => {
+       tok   = m.substr(1).split("@")
+       match = tok.shift()
+       if( tok.length ) tok.map( (t) => bibs.tags[t] = `@${t}{${match},\n}` )
+       else if( match.substr(-1) == '#' )
+         bibs.tags[match] = `@{${match.replace(/#/,'')}}`
+       else bibs.tags[match] = `@${match}{${match},\n}`
+    })
+    return text.replace( bibs.regex, '') + Object.values(bibs.tags).join('\n')
+  },
+
+  decode: (str) => {
+    // bibtex:      ↓@    ↓<tag|tag{phrase,|{ruler}>  ↓property         ↓end
+    let pat   = [ /@/, /^\S+[,{}]/, /},/, /}/ ]
+    let tags  = [], text='', i=0, prop=''
+    let lines = xrtext.expandBibs(str).replace(/\r?\n/g,'\n').split(/\n/)
+    for( let i = 0; i < lines.length && !String(lines[i]).match( /^@/ ); i++ )
+        text += lines[i]+'\n'
+
+    bibtex = lines.join('\n').substr( text.length )
+    bibtex.split( pat[0] ).map( (t) => {
+        try{
+            let v = {}
+            if( !(t = t.trim()) ) return
+            if( tag = t.match( pat[1] ) ) tag = tag[0]
+            if( tag.match( /^{.*}$/ ) ) return tags.push({ruler:tag})
+            t = t.substr( tag.length )
+            t.split( pat[2] )
+            .map( kv => {
+                if( !(kv = kv.trim()) || kv == "}" ) return
+                v[ kv.match(/\s?(\S+)\s?=/)[1] ] = kv.substr( kv.indexOf("{")+1 )
+            })
+            tags.push( { k:tag, v } )
+        }catch(e){ console.error(e) }
+    })
+    return {text, tags}
+  },
+
+  encode: (text,tags) => {
+    let str = text+"\n"
+    for( let i in tags ){
+      let item = tags[i]
+      if( item.ruler ){
+        str += `@${item.ruler}\n`
+        continue;
+      }
+      str += `@${item.k}\n`
+      for( let j in item.v ) str += `  ${j} = {${item.v[j]}}\n`
+      str += `}\n`
+    }
+    return str
+  }
+}
+```
+
+The above functions (de)multiplex text/metadata, expand bibs, and (de)serialize BibTeX (and it all fits more or less on one A4 page).
+
+> The above can be used as a startingpoint for LLMs to translate/steelman to a more formal form/language.
+
+```
+str = `
+hello world
+here are some hashtagbibs followed by bibtex:
+
+#world
+#hello@greeting
+#another-section#
+
+@{some-section}
+@flap{
+  asdf = {23423}
+}`
+
+var {tags,text} = xrtext.decode(str)          // demultiplex text & bibtex
+tags.find( (t) => t.k == 'flap{' ).v.asdf = 1 // edit tag
+tags.push({ k:'bar{', v:{abc:123} })          // add tag
+console.log( xrtext.encode(text,tags) )       // multiplex text & bibtex back together
+```
+This expands to the following (hidden by default) BibTeX appendix:
+
+```
+hello world
+here are some hashtagbibs followed by bibtex:
+
+@{some-section}
+@flap{
+  asdf = {1}
+}
+@world{world,
+}
+@greeting{hello,
+}
+@{another-section}
+@bar{
+  abc = {123}
+}
+```
+
+# HYPER copy/paste
+
+The previous example offers something exciting compared to simple copy/paste of 3D objects or text.
+XR Text, according to the XR Fragment spec, allows HYPER-copy/paste: time, space and text interlinked.
+Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share data in these ways:
+
+1. time/space: 3D object (current animation-loop)
+1. text: TeXt object (including BibTeX/visual-meta if any)
+1. interlinked: Collected objects by visual-meta tag
+
+# Security Considerations
+
+Since XR Text contains metadata too, the user should be able to set up tagging-rules, so the copy-paste feature can:
+
+* filter out sensitive data when copy/pasting (XR text with `class:secret` e.g.)
+
+# IANA Considerations
+
+This document has no IANA actions.
+
+# Acknowledgments
+
+TODO acknowledge.
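The `class:secret` filter from the Security Considerations could look like this minimal sketch (the node shape `{name, class, src}` and the name `copyable` are assumptions for illustration, not part of the spec):

```
// Hypothetical sketch: drop objects tagged `class:secret` before a
// copy/paste payload leaves the XR browser.
function copyable(sceneNodes) {
  // treat `class` as a space-separated list of tags, like HTML classnames
  return sceneNodes.filter(n => !(n.class || '').split(/\s+/).includes('secret'))
}

const nodes = [
  { name: 'note',  class: 'house',  src: 'data: todo: call owner' },
  { name: 'diary', class: 'secret', src: 'data: do not share this' }
]
console.log(copyable(nodes).map(n => n.name))  // [ 'note' ]
```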
diff --git a/doc/RFC_XR_Fragments.txt b/doc/RFC_XR_Fragments.txt index 4584e49..472b636 100644 --- a/doc/RFC_XR_Fragments.txt +++ b/doc/RFC_XR_Fragments.txt @@ -3,7 +3,7 @@ Internet Engineering Task Force L.R. van Kammen -Internet-Draft 7 September 2023 +Internet-Draft 8 September 2023 Intended status: Informational @@ -40,7 +40,7 @@ Status of This Memo time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." - This Internet-Draft will expire on 10 March 2024. + This Internet-Draft will expire on 11 March 2024. Copyright Notice @@ -53,7 +53,7 @@ Copyright Notice -van Kammen Expires 10 March 2024 [Page 1] +van Kammen Expires 11 March 2024 [Page 1] Internet-Draft XR Fragments September 2023 @@ -83,11 +83,11 @@ Table of Contents 9.3. Bibs & BibTeX: lowest common denominator for linking data . . . . . . . . . . . . . . . . . . . . . . . . . . 13 9.4. XR Text example parser . . . . . . . . . . . . . . . . . 15 - 10. HYPER copy/paste . . . . . . . . . . . . . . . . . . . . . . 17 - 11. Security Considerations . . . . . . . . . . . . . . . . . . . 17 + 10. HYPER copy/paste . . . . . . . . . . . . . . . . . . . . . . 18 + 11. Security Considerations . . . . . . . . . . . . . . . . . . . 18 12. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 18 13. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 18 - 14. Appendix: Definitions . . . . . . . . . . . . . . . . . . . . 18 + 14. Appendix: Definitions . . . . . . . . . . . . . . . . . . . . 19 1. Introduction @@ -104,12 +104,12 @@ Table of Contents metadata 2. hasslefree tagging across text and spatial objects using bibs (https://github.com/coderofsalvation/tagbibs) / BibTags - (https://en.wikipedia.org/wiki/BibTeX) as appendix (see visual- + (https://en.wikipedia.org/wiki/BibTeX) appendices (see visual- meta (https://visual-meta.info) e.g.) 
-van Kammen Expires 10 March 2024 [Page 2] +van Kammen Expires 11 March 2024 [Page 2] Internet-Draft XR Fragments September 2023 @@ -165,7 +165,7 @@ Internet-Draft XR Fragments September 2023 -van Kammen Expires 10 March 2024 [Page 3] +van Kammen Expires 11 March 2024 [Page 3] Internet-Draft XR Fragments September 2023 @@ -221,7 +221,7 @@ Internet-Draft XR Fragments September 2023 -van Kammen Expires 10 March 2024 [Page 4] +van Kammen Expires 11 March 2024 [Page 4] Internet-Draft XR Fragments September 2023 @@ -277,7 +277,7 @@ Internet-Draft XR Fragments September 2023 -van Kammen Expires 10 March 2024 [Page 5] +van Kammen Expires 11 March 2024 [Page 5] Internet-Draft XR Fragments September 2023 @@ -333,7 +333,7 @@ Internet-Draft XR Fragments September 2023 -van Kammen Expires 10 March 2024 [Page 6] +van Kammen Expires 11 March 2024 [Page 6] Internet-Draft XR Fragments September 2023 @@ -389,7 +389,7 @@ Internet-Draft XR Fragments September 2023 -van Kammen Expires 10 March 2024 [Page 7] +van Kammen Expires 11 March 2024 [Page 7] Internet-Draft XR Fragments September 2023 @@ -413,11 +413,8 @@ Internet-Draft XR Fragments September 2023 case of emergency. Given the new dawn of (non-keyboard) XR interfaces, keeping text as is (not obscuring with markup) is preferred. - Ideally metadata must come *later with* text, but not *obfuscate* the - text, or *in another* file. - - | Humans first, machines (AI) later (core principle (#core- - | principle) + Ideally metadata must come *with* text, but not *obfuscate* the text, + or *in another* file. 
This way: @@ -441,18 +438,18 @@ Internet-Draft XR Fragments September 2023 funneling human thought into typesafe, precise, pre-categorized metadata like RDF (see the core principle (#core-principle)) + This allows recursive connections between text itself, as well as 3D + objects and vice versa, using *BibTags* : -van Kammen Expires 10 March 2024 [Page 8] + +van Kammen Expires 11 March 2024 [Page 8] Internet-Draft XR Fragments September 2023 - This allows recursive connections between text itself, as well as 3D - objects and vice versa, using *BibTags* : - +---------------------------------------------+ +------------------+ | My Notes | | / \ | | | | / \ | @@ -464,6 +461,10 @@ Internet-Draft XR Fragments September 2023 | } | +---------------------------------------------+ + | The enduser can add connections by speaking/typing/scanning + | hashtagbibs (https://github.com/coderofsalvation/hashtagbibs) + | which the XR Browser can expand to (hidden) BibTags. + This allows instant realtime tagging of objects at various scopes: @@ -500,8 +501,7 @@ Internet-Draft XR Fragments September 2023 - -van Kammen Expires 10 March 2024 [Page 9] +van Kammen Expires 11 March 2024 [Page 9] Internet-Draft XR Fragments September 2023 @@ -557,7 +557,7 @@ Internet-Draft XR Fragments September 2023 -van Kammen Expires 10 March 2024 [Page 10] +van Kammen Expires 11 March 2024 [Page 10] Internet-Draft XR Fragments September 2023 @@ -583,41 +583,46 @@ Internet-Draft XR Fragments September 2023 text/plain;charset=US-ASCII - to a green eco-friendly: + to a hashtagbib(tex)-friendly one: text/plain;charset=utf-8;bib=^@ - This indicates that bibs (https://github.com/coderofsalvation/ - tagbibs) and bibtags (https://en.wikipedia.org/wiki/BibTeX) matching - regex ^@ will automatically get filtered out, in order to: + This indicates that: - * automatically detect links between textual/spatial objects - * detect opiniated bibtag appendices (visual-meta (https://visual- - meta.info) e.g.) 
+ * utf-8 is supported by default + * hashtagbibs (https://github.com/coderofsalvation/hashtagbibs) are + expanded to bibtags (https://en.wikipedia.org/wiki/BibTeX) + * lines matching regex ^@ will automatically get filtered out, in + order to: + * links between textual/spatial objects can automatically be + detected + * bibtag appendices (visual-meta (https://visual-meta.info) can be + interpreted e.g. - It's concept is similar to literate programming, which empower local/ - remote responses to: + | for more info on this mimetype see bibs + | (https://github.com/coderofsalvation/hashtagbibs) - * (de)multiplex human text and metadata in one go (see the core - principle (#core-principle)) + Advantages: + + * out-of-the-box (de)multiplex human text and metadata in one go + (see the core principle (#core-principle)) * no network-overhead for metadata (see the core principle (#core- principle)) * ensuring high FPS: HTML/RDF historically is too 'requesty'/'parsy' for game studios + + + +van Kammen Expires 11 March 2024 [Page 11] + +Internet-Draft XR Fragments September 2023 + + * rich send/receive/copy-paste everywhere by default, metadata being retained (see the core principle (#core-principle)) * netto result: less webservices, therefore less servers, and overall better FPS in XR - - - - -van Kammen Expires 10 March 2024 [Page 11] - -Internet-Draft XR Fragments September 2023 - - | This significantly expands expressiveness and portability of human | tagged text, by *postponing machine-concerns to the end of the | human text* in contrast to literal interweaving of content and @@ -625,12 +630,6 @@ Internet-Draft XR Fragments September 2023 For all other purposes, regular mimetypes can be used (but are not required by the spec). - To keep XR Fragments a lightweight spec, BibTeX is used for text/ - spatial tagging (not a scripting language or RDF e.g.). 
- - | Applications are also free to attach any JSON(LD / RDF) to spatial - | objects using custom properties (but is not interpreted by this - | spec). 9.2. URL and Data URI @@ -656,7 +655,7 @@ Internet-Draft XR Fragments September 2023 e.g.). | additional tagging using bibs - | (https://github.com/coderofsalvation/tagbibs): to tag spatial + | (https://github.com/coderofsalvation/hashtagbibs): to tag spatial | object note_canvas with 'todo', the enduser can type or speak | @note_canvas@todo @@ -669,7 +668,8 @@ Internet-Draft XR Fragments September 2023 -van Kammen Expires 10 March 2024 [Page 12] + +van Kammen Expires 11 March 2024 [Page 12] Internet-Draft XR Fragments September 2023 @@ -681,8 +681,8 @@ Internet-Draft XR Fragments September 2023 | └── ◻ rentalhouse | | └ class: house <----------------- matches -------+ | └ ◻ note | | - | └ src:`data: todo: call owner | bib | - | @owner@house@todo | ----> expands to @house{owner, + | └ src:`data: todo: call owner | hashtagbib | + | #owner@house@todo | ----> expands to @house{owner, | | bibtex: } | ` | @contact{ +------------------------------------------------+ } @@ -703,8 +703,8 @@ Internet-Draft XR Fragments September 2023 | "When a car breaks down, the ones *without* turbosupercharger are | easier to fix" - Unlike XML or JSON, the typeless, unnested, everything-is-text nature - of BibTeX tags is a great advantage for introspection. + Unlike XML or JSON, BibTex is typeless, unnested, and uncomplicated, + hence a great advantage for introspection. It's a missing sensemaking precursor to extrospective RDF. 
BibTeX-appendices are already used in the digital AND physical world (academic books, visual-meta (https://visual-meta.info)), perhaps due @@ -725,7 +725,7 @@ Internet-Draft XR Fragments September 2023 -van Kammen Expires 10 March 2024 [Page 13] +van Kammen Expires 11 March 2024 [Page 13] Internet-Draft XR Fragments September 2023 @@ -744,7 +744,7 @@ Internet-Draft XR Fragments September 2023 +----------------+-------------------------------------+---------------+ |voice/paper- |bibs |no | |friendly |(https://github.com/coderofsalvation/| | - | |tagbibs) | | + | |hashtagbibs) | | +----------------+-------------------------------------+---------------+ |leaves |yes |no | |(dictated) text | | | @@ -781,7 +781,7 @@ Internet-Draft XR Fragments September 2023 -van Kammen Expires 10 March 2024 [Page 14] +van Kammen Expires 11 March 2024 [Page 14] Internet-Draft XR Fragments September 2023 @@ -808,60 +808,80 @@ Internet-Draft XR Fragments September 2023 Table 6 + | To keep XR Fragments a lightweight spec, BibTeX is used for + | rudimentary text/spatial tagging (not JSON, RDF or a scripting + | language because they're harder to write/speak/repair.). + + Applications are also free to attach any JSON(LD / RDF) to spatial + objects using custom properties (but is not interpreted by this + spec). + 9.4. XR Text example parser 1. The XR Fragments spec does not aim to harden the BiBTeX format - 2. However, respect multi-line BibTex values because of the core - principle (#core-principle) - 3. Expand bibs and rulers (like ${visual-meta-start}) according to - the tagbibs spec (https://github.com/coderofsalvation/tagbibs) + 2. respect multi-line BibTex values because of the core principle + (#core-principle) + 3. Expand hashtag(bibs) and rulers (like ${visual-meta-start}) + according to the hashtagbibs spec + (https://github.com/coderofsalvation/hashtagbibs) 4. 
BibTeX snippets should always start in the beginning of a line (regex: ^@), hence mimetype text/plain;charset=utf-8;bib=^@ Here's an XR Text (de)multiplexer in javascript, which ticks all the above boxes: -xrtext = { - - decode: (str) => { - // bibtex: ↓@ ↓ ↓property ↓end - let pat = [ /@/, /^\S+[,{}]/, /},/, /}/ ] - let tags = [], text='', i=0, prop='' - var bibs = { regex: /(@[a-zA-Z0-9_+]+@[a-zA-Z0-9_@]+)/g, tags: {}} - let lines = str.replace(/\r?\n/g,'\n').split(/\n/) - for( let i = 0; !lines[i].match( /^@/ ); i++ ) text += lines[i]+'\n' - - bibtex = lines.join('\n').substr( text.length ) - bibtex.replace( bibs.regex , (m,k,v) => { - tok = m.substr(1).split("@") -van Kammen Expires 10 March 2024 [Page 15] + + + + +van Kammen Expires 11 March 2024 [Page 15] Internet-Draft XR Fragments September 2023 - match = tok.shift() - tok.map( (t) => bibs.tags[match] = `@${t}{${match},\n}\n` ) - }) - bibtex = Object.values(bibs.tags).join('\n') + bibtex.replace( bibs.regex, '') - bibtex.split( pat[0] ).map( (t) => { - try{ - let v = {} - if( !(t = t.trim()) ) return - if( tag = t.match( pat[1] ) ) tag = tag[0] - if( tag.match( /^{.*}$/ ) ) return tags.push({ruler:tag}) - t = t.substr( tag.length ) - t.split( pat[2] ) - .map( kv => { - if( !(kv = kv.trim()) || kv == "}" ) return - v[ kv.match(/\s?(\S+)\s?=/)[1] ] = kv.substr( kv.indexOf("{")+1 ) - }) - tags.push( { k:tag, v } ) - }catch(e){ console.error(e) } - }) - return {text, tags} +xrtext = { + + expandBibs: (text) => { + let bibs = { regex: /(#[a-zA-Z0-9_+@\-]+(#)?)/g, tags: {}} + text.replace( bibs.regex , (m,k,v) => { + tok = m.substr(1).split("@") + match = tok.shift() + if( tok.length ) tok.map( (t) => bibs.tags[t] = `@${t}{${match},\n}` ) + else if( match.substr(-1) == '#' ) + bibs.tags[match] = `@{${match.replace(/#/,'')}}` + else bibs.tags[match] = `@${match}{${match},\n}` + }) + return text.replace( bibs.regex, '') + Object.values(bibs.tags).join('\n') + }, + + decode: (str) => { + // bibtex: ↓@ ↓ ↓property 
↓end + let pat = [ /@/, /^\S+[,{}]/, /},/, /}/ ] + let tags = [], text='', i=0, prop='' + let lines = xrtext.expandBibs(str).replace(/\r?\n/g,'\n').split(/\n/) + for( let i = 0; i < lines.length && !String(lines[i]).match( /^@/ ); i++ ) + text += lines[i]+'\n' + + bibtex = lines.join('\n').substr( text.length ) + bibtex.split( pat[0] ).map( (t) => { + try{ + let v = {} + if( !(t = t.trim()) ) return + if( tag = t.match( pat[1] ) ) tag = tag[0] + if( tag.match( /^{.*}$/ ) ) return tags.push({ruler:tag}) + t = t.substr( tag.length ) + t.split( pat[2] ) + .map( kv => { + if( !(kv = kv.trim()) || kv == "}" ) return + v[ kv.match(/\s?(\S+)\s?=/)[1] ] = kv.substr( kv.indexOf("{")+1 ) + }) + tags.push( { k:tag, v } ) + }catch(e){ console.error(e) } + }) + return {text, tags} }, encode: (text,tags) => { @@ -870,6 +890,14 @@ Internet-Draft XR Fragments September 2023 let item = tags[i] if( item.ruler ){ str += `@${item.ruler}\n` + + + +van Kammen Expires 11 March 2024 [Page 16] + +Internet-Draft XR Fragments September 2023 + + continue; } str += `@${item.k}\n` @@ -886,22 +914,14 @@ Internet-Draft XR Fragments September 2023 | above can be used as a startingpoint for LLVM's to translate/ | steelman to a more formal form/language. 
- - - - - - - -van Kammen Expires 10 March 2024 [Page 16] - -Internet-Draft XR Fragments September 2023 - - str = ` hello world +here are some hashtagbibs followed by bibtex: + +#world +#hello@greeting +#another-section# -@hello@greeting @{some-section} @flap{ asdf = {23423} @@ -912,21 +932,48 @@ tags.find( (t) => t.k == 'flap{' ).v.asdf = 1 // edit tag tags.push({ k:'bar{', v:{abc:123} }) // add tag console.log( xrtext.encode(text,tags) ) // multiplex text & bibtex back together - This outputs: + This expands to the following (hidden by default) BibTex appendix: + + + + + + + + + + + + + + + + +van Kammen Expires 11 March 2024 [Page 17] + +Internet-Draft XR Fragments September 2023 + hello world + here are some hashtagbibs followed by bibtex: - - @greeting{hello, - } @{some-section} @flap{ asdf = {1} } + @world{world, + } + @greeting{hello, + } + @{another-section} @bar{ abc = {123} } + | when an XR browser updates the human text, a quick scan for + | nonmatching tags (@book{nonmatchingbook e.g.) should be performed + | and prompt the enduser for deleting them. + 10. HYPER copy/paste The previous example, offers something exciting compared to simple @@ -947,13 +994,6 @@ console.log( xrtext.encode(text,tags) ) // multiplex text & bibtex back to * filter out sensitive data when copy/pasting (XR text with class:secret e.g.) - - -van Kammen Expires 10 March 2024 [Page 17] - -Internet-Draft XR Fragments September 2023 - - 12. IANA Considerations This document has no IANA actions. @@ -962,6 +1002,14 @@ Internet-Draft XR Fragments September 2023 * NLNET (https://nlnet.nl) * Future of Text (https://futureoftext.org) + + + +van Kammen Expires 11 March 2024 [Page 18] + +Internet-Draft XR Fragments September 2023 + + * visual-meta.info (https://visual-meta.info) 14. 
Appendix: Definitions @@ -1002,14 +1050,6 @@ Internet-Draft XR Fragments September 2023 | requestless | metadata which never spawns new requests | | metadata | (unlike RDF/HTML, which can cause framerate- | | | dropping, hence not used a lot in games) | - - - -van Kammen Expires 10 March 2024 [Page 18] - -Internet-Draft XR Fragments September 2023 - - +---------------+----------------------------------------------+ | FPS | frames per second in spatial experiences | | | (games,VR,AR e.g.), should be as high as | @@ -1018,6 +1058,14 @@ Internet-Draft XR Fragments September 2023 | introspective | inward sensemaking ("I feel this belongs to | | | that") | +---------------+----------------------------------------------+ + + + +van Kammen Expires 11 March 2024 [Page 19] + +Internet-Draft XR Fragments September 2023 + + | extrospective | outward sensemaking ("I'm fairly sure John | | | is a person who lives in oklahoma") | +---------------+----------------------------------------------+ @@ -1031,6 +1079,10 @@ Internet-Draft XR Fragments September 2023 | | for plaintext | +---------------+----------------------------------------------+ | BibTag | a BibTeX tag | + +---------------+----------------------------------------------+ + | (hashtag)bibs | an easy to speak/type/scan tagging SDL (see | + | | here (https://github.com/coderofsalvation/ | + | | hashtagbibs) | +---------------+----------------------------------------------+ Table 7 @@ -1061,4 +1113,8 @@ Internet-Draft XR Fragments September 2023 -van Kammen Expires 10 March 2024 [Page 19] + + + + +van Kammen Expires 11 March 2024 [Page 20] diff --git a/doc/RFC_XR_Fragments.xml b/doc/RFC_XR_Fragments.xml index cf5a8a8..aa1d5f6 100644 --- a/doc/RFC_XR_Fragments.xml +++ b/doc/RFC_XR_Fragments.xml @@ -35,7 +35,7 @@ XR Fragments allows us to enrich/connect existing dataformats, by recursive use
  1. addressibility and navigation of 3D scenes/objects: URI Fragments + src/href spatial metadata
  2. -
  3. hasslefree tagging across text and spatial objects using bibs / BibTags as appendix (see visual-meta e.g.)
  4. +
  5. hasslefree tagging across text and spatial objects using bibs / BibTags appendices (see visual-meta e.g.)
NOTE: The chapters in this document are ordered from highlevel to lowlevel (technical) as much as possible
@@ -330,10 +330,9 @@ The most advanced human will probably not shout <h1>FIRE!</h1> -Ideally metadata must come later with text, but not obfuscate the text, or in another file.
+Ideally metadata must come with text, but not obfuscate the text, or in another file.
-
Humans first, machines (AI) later (core principle -
This way: +This way:
  1. XR Fragments allows <b id="tagging-text">hasslefree XR text tagging</b>, using BibTeX metadata at the end of content (like visual-meta).
  2. @@ -356,7 +355,8 @@ Ideally metadata must come later with text, but not obf | } | +---------------------------------------------+ -This allows instant realtime tagging of objects at various scopes: +
    The enduser can add connections by speaking/typing/scanning hashtagbibs which the XR Browser can expand to (hidden) BibTags. +
    This allows instant realtime tagging of objects at various scopes: @@ -405,18 +405,22 @@ The simplicity of appending BibTeX 'tags' (humans first, machines later) is also The src-values work as expected (respecting mime-types), however:The XR Fragment specification bumps the traditional default browser-mimetypetext/plain;charset=US-ASCII -to a green eco-friendly: +to a hashtagbib(tex)-friendly one:text/plain;charset=utf-8;bib=^@ -This indicates that bibs and bibtags matching regex ^@ will automatically get filtered out, in order to: +This indicates that:
      -
    • automatically detect links between textual/spatial objects
    • -
    • detect opiniated bibtag appendices (visual-meta e.g.)
    • +
    • utf-8 is supported by default
    • +
    • hashtagbibs are expanded to bibtags
    • +
    • lines matching regex ^@ will automatically get filtered out, in order to:
    • +
    • links between textual/spatial objects can automatically be detected
    • +
    • bibtag appendices (visual-meta can be interpreted e.g.
    -It's concept is similar to literate programming, which empower local/remote responses to: +
    for more info on this mimetype see bibs +
    Advantages:
      -
    • (de)multiplex human text and metadata in one go (see the core principle)
    • +
    • out-of-the-box (de)multiplex human text and metadata in one go (see the core principle)
    • no network-overhead for metadata (see the core principle)
    • ensuring high FPS: HTML/RDF historically is too 'requesty'/'parsy' for game studios
    • rich send/receive/copy-paste everywhere by default, metadata being retained (see the core principle)
    • @@ -424,10 +428,8 @@ The simplicity of appending BibTeX 'tags' (humans first, machines later) is also
    This significantly expands expressiveness and portability of human tagged text, by postponing machine-concerns to the end of the human text in contrast to literal interweaving of content and markupsymbols (or extra network requests, webservices e.g.).
    For all other purposes, regular mimetypes can be used (but are not required by the spec).
    - -To keep XR Fragments a lightweight spec, BibTeX is used for text/spatial tagging (not a scripting language or RDF e.g.).
    -
    Applications are also free to attach any JSON(LD / RDF) to spatial objects using custom properties (but is not interpreted by this spec). -
    + +
    URL and Data URI @@ -448,7 +450,7 @@ To keep XR Fragments a lightweight spec, BibTeX is used for text/spatial tagging The beauty is that text (AND visual-meta) in Data URI promotes rich copy-paste. In both cases, the text gets rendered immediately (onto a plane geometry, hence the name '_canvas'). The XR Fragment-compatible browser can let the enduser access visual-meta(data)-fields after interacting with the object (contextmenu e.g.). -
    additional tagging using bibs: to tag spatial object note_canvas with 'todo', the enduser can type or speak @note_canvas@todo +
    additional tagging using bibs: to tag spatial object note_canvas with 'todo', the enduser can type or speak @note_canvas@todo
    The mapping between 3D objects and text (src-data) is simple (the : Example: @@ -459,8 +461,8 @@ The XR Fragment-compatible browser can let the enduser access visual-meta(data)- | └── ◻ rentalhouse | | └ class: house <----------------- matches -------+ | └ ◻ note | | - | └ src:`data: todo: call owner | bib | - | @owner@house@todo | ----> expands to @house{owner, + | └ src:`data: todo: call owner | hashtagbib | + | #owner@house@todo | ----> expands to @house{owner, | | bibtex: } | ` | @contact{ +------------------------------------------------+ } @@ -475,7 +477,7 @@ The XR Fragment-compatible browser can let the enduser access visual-meta(data)-
    Bibs & BibTeX: lowest common denominator for linking data
    "When a car breaks down, the ones without turbosupercharger are easier to fix" -
    Unlike XML or JSON, the typeless, unnested, everything-is-text nature of BibTeX tags is a great advantage for introspection.
    +
    Unlike XML or JSON, BibTex is typeless, unnested, and uncomplicated, hence a great advantage for introspection.
    It's a missing sensemaking precursor to extrospective RDF.
    @@ -523,7 +525,7 @@ In that sense, it's one step up from the .ini fileformat (which has nev
    - + @@ -623,58 +625,67 @@ In that sense, it's one step up from the .ini fileformat (which has nev -
    voice/paper-friendlybibsbibs no
    yes
    +
    To keep XR Fragments a lightweight spec, BibTeX is used for rudimentary text/spatial tagging (not JSON, RDF or a scripting language because they're harder to write/speak/repair.). +
    Applications are also free to attach any JSON(LD / RDF) to spatial objects using custom properties (but is not interpreted by this spec). +
    XR Text example parser
    1. The XR Fragments spec does not aim to harden the BiBTeX format
    2. -
    3. However, respect multi-line BibTex values because of the core principle
    4. -
    5. Expand bibs and rulers (like ${visual-meta-start}) according to the tagbibs spec
    6. +
    7. respect multi-line BibTex values because of the core principle
    8. +
    9. Expand hashtag(bibs) and rulers (like ${visual-meta-start}) according to the hashtagbibs spec
    10. BibTeX snippets should always start in the beginning of a line (regex: ^@), hence mimetype text/plain;charset=utf-8;bib=^@
    Here's an XR Text (de)multiplexer in javascript, which ticks all the above boxes: xrtext = { - - decode: (str) => { - // bibtex: ↓@ ↓<tag|tag{phrase,|{ruler}> ↓property ↓end - let pat = [ /@/, /^\S+[,{}]/, /},/, /}/ ] - let tags = [], text='', i=0, prop='' - var bibs = { regex: /(@[a-zA-Z0-9_+]+@[a-zA-Z0-9_@]+)/g, tags: {}} - let lines = str.replace(/\r?\n/g,'\n').split(/\n/) - for( let i = 0; !lines[i].match( /^@/ ); i++ ) text += lines[i]+'\n' - bibtex = lines.join('\n').substr( text.length ) - bibtex.replace( bibs.regex , (m,k,v) => { - tok = m.substr(1).split("@") - match = tok.shift() - tok.map( (t) => bibs.tags[match] = `@${t}{${match},\n}\n` ) - }) - bibtex = Object.values(bibs.tags).join('\n') + bibtex.replace( bibs.regex, '') - bibtex.split( pat[0] ).map( (t) => { - try{ - let v = {} - if( !(t = t.trim()) ) return - if( tag = t.match( pat[1] ) ) tag = tag[0] - if( tag.match( /^{.*}$/ ) ) return tags.push({ruler:tag}) - t = t.substr( tag.length ) - t.split( pat[2] ) - .map( kv => { - if( !(kv = kv.trim()) || kv == "}" ) return - v[ kv.match(/\s?(\S+)\s?=/)[1] ] = kv.substr( kv.indexOf("{")+1 ) - }) - tags.push( { k:tag, v } ) - }catch(e){ console.error(e) } - }) - return {text, tags} + expandBibs: (text) => { + let bibs = { regex: /(#[a-zA-Z0-9_+@\-]+(#)?)/g, tags: {}} + text.replace( bibs.regex , (m,k,v) => { + tok = m.substr(1).split("@") + match = tok.shift() + if( tok.length ) tok.map( (t) => bibs.tags[t] = `@${t}{${match},\n}` ) + else if( match.substr(-1) == '#' ) + bibs.tags[match] = `@{${match.replace(/#/,'')}}` + else bibs.tags[match] = `@${match}{${match},\n}` + }) + return text.replace( bibs.regex, '') + Object.values(bibs.tags).join('\n') }, + decode: (str) => { + // bibtex: ↓@ ↓<tag|tag{phrase,|{ruler}> ↓property ↓end + let pat = [ /@/, /^\S+[,{}]/, /},/, /}/ ] + let tags = [], text='', i=0, prop='' + let lines = xrtext.expandBibs(str).replace(/\r?\n/g,'\n').split(/\n/) + for( let i = 0; i < lines.length && !String(lines[i]).match( /^@/ ); 
i++ ) + text += lines[i]+'\n' + + bibtex = lines.join('\n').substr( text.length ) + bibtex.split( pat[0] ).map( (t) => { + try{ + let v = {} + if( !(t = t.trim()) ) return + if( tag = t.match( pat[1] ) ) tag = tag[0] + if( tag.match( /^{.*}$/ ) ) return tags.push({ruler:tag}) + t = t.substr( tag.length ) + t.split( pat[2] ) + .map( kv => { + if( !(kv = kv.trim()) || kv == "}" ) return + v[ kv.match(/\s?(\S+)\s?=/)[1] ] = kv.substr( kv.indexOf("{")+1 ) + }) + tags.push( { k:tag, v } ) + }catch(e){ console.error(e) } + }) + return {text, tags} + }, + encode: (text,tags) => { let str = text+"\n" for( let i in tags ){ let item = tags[i] - if( item.ruler ){ + if( item.ruler ){ str += `@${item.ruler}\n` continue; } @@ -682,7 +693,7 @@ In that sense, it's one step up from the .ini fileformat (which has nev for( let j in item.v ) str += ` ${j} = {${item.v[j]}}\n` str += `}\n` } - return str + return str } } @@ -691,8 +702,12 @@ In that sense, it's one step up from the .ini fileformat (which has nev
str = ` hello world +here are some hashtagbibs followed by bibtex: + +#world +#hello@greeting +#another-section# -@hello@greeting @{some-section} @flap{ asdf = {23423} @@ -703,22 +718,26 @@ tags.find( (t) => t.k == 'flap{' ).v.asdf = 1 // edit tag tags.push({ k:'bar{', v:{abc:123} }) // add tag console.log( xrtext.encode(text,tags) ) // multiplex text & bibtex back together -This outputs: +This expands to the following (hidden by default) BibTex appendix: hello world +here are some hashtagbibs followed by bibtex: - -@greeting{hello, -} @{some-section} @flap{ asdf = {1} } +@world{world, +} +@greeting{hello, +} +@{another-section} @bar{ abc = {123} } - +
when an XR browser updates the human text, a quick scan for nonmatching tags (e.g. @book{nonmatchingbook) should be performed, prompting the enduser to delete them. +
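Such a scan could look roughly like this (a sketch only — `findNonmatchingTags` is a hypothetical helper, and the phrase heuristic is an assumption, not part of the spec):

```javascript
// Sketch: find BibTags whose phrase no longer occurs in the human text,
// so the XR browser can prompt the enduser to delete them.
function findNonmatchingTags(str) {
  const text = str.split(/\n@/)[0]                     // human text before the first BibTag
  const orphans = []
  for (const m of str.matchAll(/@\S+\{([^,}\s]+),/g))  // every @tag{phrase,
    if (!text.includes(m[1])) orphans.push(m[0])
  return orphans
}

// the XR browser could then prompt the enduser for each orphan it returns
```

Rulers like `@{some-section}` carry no phrase and are deliberately skipped by the pattern above.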
HYPER copy/paste @@ -848,6 +867,11 @@ Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share BibTag a BibTeX tag + + +(hashtag)bibs +an easy to speak/type/scan tagging SDL (see here +
diff --git a/doc/RFC_XR_Text_Fragments.md b/doc/RFC_XR_Text_Fragments.md deleted file mode 100644 index ec2892f..0000000 --- a/doc/RFC_XR_Text_Fragments.md +++ /dev/null @@ -1,204 +0,0 @@ -%%% -Title = "XR Macros" -area = "Internet" -workgroup = "Internet Engineering Task Force" - -[seriesInfo] -name = "XR-Macros" -value = "draft-XRTEXTFRAGMENTS-leonvankammen-00" -stream = "IETF" -status = "informational" - -date = 2023-04-12T00:00:00Z - -[[author]] -initials="L.R." -surname="van Kammen" -fullname="L.R. van Kammen" - -%%% - - - - - -.# Abstract - -This draft offers a specification for embedding macros in existing 3D scenes/assets, to offer simple interactions and configure the renderer further.
-Together with URI Fragments, it allows for rich immersive experiences without the need of complicated sandboxed scripting languages.
-
-> Almost every idea in this document is demonstrated at [https://xrfragment.org](https://xrfragment.org), as this spec was created alongside the [XR Fragments](https://xrfragment.org) spec.
-
-{mainmatter}
-
-# Introduction
-
-How can we add more features to existing text & 3D scenes, without introducing new dataformats?
-Historically, there have been many attempts to create the ultimate markuplanguage or 3D fileformat.
-Their lowest common denominator is (co)authoring using plain text.
-Therefore, XR Macros allows us to enrich/connect existing dataformats by offering a polyglot notation based on existing notations:
-
-1. getting/setting commonly used 3D properties using querystring- or JSON-notation
-1. querying 3D properties using the lightweight searchengine notation used in [XR Fragments](https://xrfragment.org)
-
-> NOTE: The chapters in this document are ordered from highlevel to lowlevel (technical) as much as possible
-
-# Core principle
-
-1. XR Macros use querystrings, but are HTML-agnostic (though pseudo-XR Fragment browsers **can** be implemented on top of HTML/Javascript).
-1. XR Macros represent setting/getting commonly used properties found in all popular 3D frameworks/(game)editors/internet browsers.
-1. XR Macros act as simple eventhandlers for URI Fragments.
-
-# Conventions and Definitions
-
-See the appendix below in case certain terms are not clear.
-
-# List of XR Macros
-
-(XR) Macros can be embedded in 3D assets/scenes.
-Macros enrich existing spatial content with a lowcode, limited logic-layer, by recursive (economic) use of the querystring syntax (which search engines and [XR Fragments](https://xrfragment.org) already use).
-This is done by allowing string/integer variables, and the `|` symbol to cycle (roundrobin) through variable values.
-Macros also act as events, so more serious scripting languages can react to them as well.
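The `|` roundrobin described above could be sketched as follows (illustrative javascript; `cycleValue` and its state object are assumptions, not normative):

```javascript
// Sketch of `|` roundrobin: each time a macro fires, pick the next
// value from e.g. "day|noon|night", remembering the position per macro.
const rrState = {}
function cycleValue(name, value) {
  const options = value.split('|')
  if (options.length === 1) return value            // plain value, nothing to cycle
  rrState[name] = ((rrState[name] ?? -1) + 1) % options.length
  return options[rrState[name]]
}

// clicking !clickme repeatedly yields: day, noon, night, day, ...
```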
- -## Usecase: click object - -| custom property | value | trigger when | -|-----------------|--------------------------|------------------------| -| !clickme | bg=1,1,1&foo=2 | object clicked | - -## Usecase: conditional click object - -| custom property | value | trigger when | -|-----------------|--------------------------|-----------------------------| -| # | foo=1 | scene | -| !clickme | q=foo>2&bg=1,1,1 | object clicked and foo > 2 | - -> when a user clicks an object with the custom properties above, it should set the backgroundcolor to `1,1,1` when `foo` is greater than `2` (see previous example) - -## Usecase: click object (roundrobin) - -| custom property | value | trigger when | -|-----------------|--------------------------|------------------------| -| !clickme | day|noon|night | object clicked | -| day | bg=1,1,1 | roundrobin | -| noon | bg=0.5,0.5,0.5 | roundrobin | -| night | bg=0,0,0&foo=2 | roundrobin | - -> when a user clicks an object with the custom properties above, it should trigger either `day` `noon` or `night` in roundrobin fashion. - -## Usecase: click object, URI fragment and scene load - -| custom property | value | trigger when | -|-----------------|--------------------------|------------------------| -| # | random | scene loaded | -| #random | random | URL contains #random | -| !random | day|noon|night | #random, # or click | -| day | bg=1,1,1 | roundrobin | -| noon | bg=0.5,0.5,0.5 | roundrobin | -| night | bg=0,0,0&foo=2 | roundrobin | - -## Usecase: present context menu with options - -| custom property | value | trigger when | -|-----------------|--------------------------|------------------------| -| !random | day|noon|night | clicked in contextmenu | -| !day | bg=1,1,1 | clicked in contextmenu | -| !noon | bg=0.5,0.5,0.5 | clicked in contextmenu | -| !night | bg=0,0,0&foo=2 | clicked in contextmenu | - -> The XR Browser should offer a contextmenu with these options when more than one `!`-macro is present on an object. 
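To make the conditional usecases above concrete, here is an illustrative evaluation sketch (the `evalMacro` helper, the `vars` store, and supporting only the `>` comparison are assumptions, not part of this draft):

```javascript
// Sketch: evaluate a macro value like "q=foo>2&bg=1,1,1" against scene
// variables; return the properties to apply, or null if the condition fails.
function evalMacro(value, vars) {
  const out = {}
  for (const p of value.split('&')) {
    const [k, v] = p.split('=')
    if (k === 'q') {                                  // conditional: q=foo>2
      const m = v.match(/^(\w+)>(\d+)$/)              // only '>' is sketched here
      if (m && !((vars[m[1]] ?? 0) > Number(m[2]))) return null
    } else out[k] = v                                 // e.g. bg = "1,1,1"
  }
  return out                                          // renderer applies these
}
```

With `foo=1` the conditional click from the table yields nothing; once `foo` exceeds `2` it yields the `bg` property for the renderer.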
- -# Security Considerations - - -# IANA Considerations - -This document has no IANA actions. - -# Acknowledgments - -* [NLNET](https://nlnet.nl) -* [Future of Text](https://futureoftext.org) -* [visual-meta.info](https://visual-meta.info) - -# Appendix: Definitions - -|definition | explanation | -|----------------------|-------------------------------------------------------------------------------------------------------------------------------| -|scene | a (local/remote) 3D scene or 3D file (index.gltf e.g.) | -|3D object | an object inside a scene characterized by vertex-, face- and customproperty data. | -|XR fragments | URI Fragment with spatial hints like `#pos=0,0,0&t=1,100` e.g. | -|query | an URI Fragment-operator which queries object(s) from a scene like `#q=cube` | -|FPS | frames per second in spatial experiences (games,VR,AR e.g.), should be as high as possible | -|`◻` | ascii representation of an 3D object/mesh | -|(un)obtrusive | obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine-symbols and words | -