update documentation

This commit is contained in:
Leon van Kammen 2023-09-02 21:44:57 +02:00
parent 3cd4153822
commit d8ff3c056c
6 changed files with 787 additions and 213 deletions

View File

@ -1,49 +0,0 @@
<link rel="stylesheet" href="style.css"/>
<link href="https://fonts.cdnfonts.com/css/montserrat" rel="stylesheet"/>
> version 1.0.0
date: 2023-04-27T22:44:39+0200<br>
[![Actions Status](https://github.com/coderofsalvation/xrfragment/workflows/test/badge.svg)](https://github.com/coderofsalvation/xrfragment/actions)
# XRFragment Grammar
```
reserved = gen-delims / sub-delims
gen-delims = "#" / "&"
sub-delims = "," / "|" / "="
```
<br>
> Example: `://foo.com/my3d.asset#pos=1,0,0&prio=-5&t=0,100|100,200`
<br>
| syntax | explanation |
|-|-|
| `x=1,2,3` | vector/coordinate argument e.g. |
| `x=foo\|bar\|1,2,3\|1.0` | the `\|` character is used for:<br>1. specifying `n` arguments for xrfragment `x`<br>2. round-robin of values (in case the provided arguments exceed `n` of `x` in #1) when triggered by a browser URI (clicking an `href` e.g.)|
| `https://x.co/1.gltf||xyz://x.co/1.gltf` | multi-protocol/fallback urls |
| `.mygroup` | query-alias for `class:mygroup` |
> Focus: hasslefree 3D vector-data (`,`), multi-protocol/fallback-linking & dynamic values (`|`), and CSS-piggybacking (`.mygroup`)
# URI parser
> icanhazcode? yes, see [URI.hx](https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/URI.hx)
1. fragment URI starts with `#`
1. fragments are split by `&`
1. store key/values into an associative array or dynamic object
1. loop thru each fragment
1. for each fragment split on `=` to separate key/values
1. fragment-values are url-encoded (a space becomes `%20` using `encodeURIComponent` e.g.)
1. every recognized fragment key/value-pair is added to a central map/associative array/object
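A minimal sketch of these steps in Haxe (illustrative only; the `FragmentParser` class below is hypothetical and not the reference `URI.hx`):
```
// sketch of the steps above (illustrative, not the reference URI.hx)
class FragmentParser {
  public static function parse(url:String):Map<String,String> {
    var store = new Map<String,String>();            // central associative array
    var hash  = url.indexOf("#");                    // fragment URI starts with `#`
    if (hash == -1) return store;
    for (frag in url.substr(hash + 1).split("&")) {  // fragments are split by `&`
      var kv    = frag.split("=");                   // split on `=` to separate key/value
      var key   = kv[0];
      var value = kv.length > 1 ? StringTools.urlDecode(kv[1]) : ""; // values are url-encoded
      if (key != "") store.set(key, value);          // add recognized pair to the map
    }
    return store;
  }

  static function main() {
    trace(FragmentParser.parse("://foo.com/my3d.asset#pos=1,0,0&prio=-5&t=0,100|100,200"));
  }
}
```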
# XR Fragments parser
> icanhazcode? yes, see [Parser.hx](https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Parser.hx)
the gist of it:
1. check if the param exists

View File

@ -14,7 +14,6 @@
body{
font-family: monospace;
max-width: 900px;
text-align: justify;
font-size: 15px;
padding: 0% 20%;
line-height: 30px;
@ -23,6 +22,22 @@
}
h1 { margin-top:40px; }
pre{ line-height:18px; }
a,a:visited,a:active{ color: #70f; }
code{
border: 1px solid #AAA;
border-radius: 3px;
padding: 0px 5px 2px 5px;
}
pre>code{
border:none;
border-radius:0px;
padding:0;
}
blockquote{
padding-left: 30px;
margin: 0;
border-left: 5px solid #CCC;
}
</style>
@ -52,7 +67,7 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
<p>How can we add more features to existing text &amp; 3D scenes, without introducing new dataformats?
Historically, there&rsquo;s many attempts to create the ultimate markuplanguage or 3D fileformat.
However, thru the lens of authorina,g their lowest common denominator is still: plain text.
However, thru the lens of authoring, their lowest common denominator is still: plain text.
XR Fragments allows us to enrich existing dataformats, by recursive use of existing technologies:</p>
<ul>
@ -70,6 +85,7 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
<li>src: a (HTML-piggybacked) metadata-attribute of a 3D object which instances content</li>
<li>href: a (HTML-piggybacked) metadata-attribute of a 3D object which links to content</li>
<li>query: an URI Fragment-operator which queries object(s) from a scene (<code>#q=cube</code>)</li>
<li><a href="https://visual.meta.info">visual-meta</a>: metadata appended to text which is only indirectly visible/editable in XR.</li>
</ul>
<p>{::boilerplate bcp14-tagged}</p>
@ -78,13 +94,17 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
<p>Here&rsquo;s an ascii representation of a 3D scene-graph which contains 3D objects (<code></code>) and their metadata:</p>
<pre><code> index.gltf
├── ◻ buttonA
│ └ href: #pos=1,0,1&amp;t=100,200
└── ◻ buttonB
└ href: other.fbx
<pre><code> +--------------------------------------------------------+
| |
| index.gltf |
| │ |
| ├── ◻ buttonA |
| │ └ href: #pos=1,0,1&amp;t=100,200 |
| │ |
| └── ◻ buttonB |
| └ href: other.fbx |
| |
+--------------------------------------------------------+
</code></pre>
@ -94,37 +114,149 @@ In case of <code>buttonA</code> the end-user will be teleported to another locat
<h1 id="navigating-text">Navigating text</h1>
<p>TODO</p>
<p>Text in XR has to be unobtrusive, for readers as well as authors.
We think and speak in simple text, and given the new paradigm of XR interfaces, logically (spoken) text must be enriched <em>afterwards</em> (lazy metadata).
Therefore, XR Fragment-compliant text will just be plain text, and <strong>not yet-another-markuplanguage</strong>.
In contrast to markup languages, this means humans need to be always served first, and machines later.</p>
<blockquote>
<p>Basically, a direct feedbackloop between unobtrusive text and human eye.</p>
</blockquote>
<p>Reality has shown that outsourcing rich textmanipulation to commercial formats or mono-markup browsers (HTML) have their usecases, but
also introduce barriers to thought-translation (which uses simple words).
As Marshall McLuhan said: we have become irrevocably involved with, and responsible for, each other.</p>
<p>In order to enjoy hasslefree batteries-included programmable text (glossaries, flexible views, drag-drop e.g.), XR Fragment supports
<a href="https://visual.meta.info">visual-meta</a>(data).</p>
<h2 id="default-data-uri-mimetype">Default Data URI mimetype</h2>
<p>The XR Fragment specification bumps the traditional default browser-mimetype</p>
<p><code>text/plain;charset=US-ASCII</code></p>
<p>into:</p>
<p><code>text/plain;charset=utf-8;visual-meta=1</code></p>
<p>This means that <a href="https://visual.meta.info">visual-meta</a>(data) can be appended to plain text without being displayed.</p>
<h3 id="url-and-data-uri">URL and Data URI</h3>
<pre><code> +--------------------------------------------------------------+ +------------------------+
| | | author.com/article.txt |
| index.gltf | +------------------------+
| │ | | |
| ├── ◻ article_canvas | | Hello friends. |
| │ └ src: ://author.com/article.txt | | |
| │ | | @{visual-meta-start} |
| └── ◻ note_canvas | | ... |
| └ src:`data:welcome human @{visual-meta-start}...` | +------------------------+
| |
| |
+--------------------------------------------------------------+
</code></pre>
<p>The difference is that text (+visual-meta data) in Data URI is saved into the scene, which also promotes rich copy-paste.
In both cases the text will get rendered immediately (onto a plane geometry, hence the name &lsquo;_canvas&rsquo;).
The enduser can access visual-meta(data)-fields only after interacting with the object.</p>
<blockquote>
<p>NOTE: this is not to say that XR Browsers should not load HTML/PDF/etc-URLs thru <code>src</code>-metadata, it is just that <code>text/plain;charset=utf-8;visual-meta=1</code> is the minimum requirement.</p>
</blockquote>
<h2 id="omnidirectional-xr-annotations">omnidirectional XR annotations</h2>
<pre><code> +---------------------------------------------------------------+
| |
| index.gltf |
| │ |
| ├── ◻ todo |
| │ └ src:`data:learn about ARC @{visual-meta-start}...`|
| │ |
| └── ◻ ARC |
| └── ◻ plane |
| └ src: `data:ARC was revolutionary |
| @{visual-meta-start} |
| @{glossary-start} |
| @entry{ |
| name = {ARC}, |
| description = {Engelbart Concept: |
| Augmentation Research Center, |
| The name of Doug's lab at SRI. |
| }, |
| }` |
| |
+---------------------------------------------------------------+
</code></pre>
<p>Here we can see a 3D object of ARC, to which the enduser added a textnote (basically a plane geometry with <code>src</code>).
The enduser can view/edit visual-meta(data)-fields only after interacting with the object.
This allows the 3D scene to perform omnidirectional features for free, by omni-connecting the word &lsquo;ARC&rsquo;:</p>
<ul>
<li>the ARC object can draw a line to the &lsquo;ARC was revolutionary&rsquo;-note</li>
<li>the &lsquo;ARC was revolutionary&rsquo;-note can draw a line to the &lsquo;learn about ARC&rsquo;-note</li>
<li>the &lsquo;learn about ARC&rsquo;-note can draw a line to the ARC 3D object</li>
</ul>
<h1 id="hyper-copy-paste">HYPER copy/paste</h1>
<p>The previous example offers something exciting compared to simple textual copy-paste:
XR Fragment offers 4D- and HYPER-copy/paste: time, space and text interlinked.
Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share data in these ways:</p>
<ul>
<li>copy ARC 3D object (incl. animation) &amp; paste elsewhere including visual-meta(data)</li>
<li>select the word ARC in any text, and paste a bundle of anything ARC-related</li>
</ul>
<h2 id="plain-text-with-optional-visual-meta">Plain Text (with optional visual-meta)</h2>
<p>In contrast to markuplanguages, the (dictated/written) text needs no parsing and stays intact, because metadata is postponed to the appendix.</p>
<p>This allows for a very economic XR way to:</p>
<ul>
<li>directly write, dictate, render text (=fast, without markup-parser-overhead)</li>
<li>add/load metadata later (if provided)</li>
<li>enduser interactions with text (annotations,mutations) can be reflected back into the visual-meta(data) Data URI</li>
<li>copy/pasting of text will automatically cite the (mutated) source</li>
<li>allows annotating 3D objects as if they were textual representations (convert 3D document to text)</li>
</ul>
<blockquote>
<p>NOTE: visualmeta never breaks the original intended text (in contrast to forgetting a html closing-tag e.g.)</p>
</blockquote>
<h1 id="embedding-3d-content">Embedding 3D content</h1>
<p>Here&rsquo;s an ascii representation of a 3D scene-graph with 3D objects (<code></code>) which embeds remote &amp; local 3D objects (<code></code>) with or without using queries:</p>
<pre><code> +------------------------------------------------------------+ +---------------------------+
| | | |
| index.gltf | | rescue.com/aquarium.gltf |
| │ | | │ |
| ├── ◻ canvas | | └── ◻ fishbowl |
| │ └ src: painting.png | | ├─ ◻ bassfish |
| │ | | └─ ◻ tuna |
| ├── ◻ aquariumcube | | |
| │ └ src: ://rescue.com/fish.gltf#q=bassfish%20tuna | +---------------------------+
| │ |
| ├── ◻ bedroom |
| │ └ src: #q=canvas |
| │ |
| └── ◻ livingroom |
| └ src: #q=canvas |
| |
+------------------------------------------------------------+
<pre><code> +--------------------------------------------------------+ +-------------------------+
| | | |
| index.gltf | | ocean.com/aquarium.fbx |
| │ | | │ |
| ├── ◻ canvas | | └── ◻ fishbowl |
| │ └ src: painting.png | | ├─ ◻ bass |
| │ | | └─ ◻ tuna |
| ├── ◻ aquariumcube | | |
| │ └ src: ://rescue.com/fish.gltf#q=bass%20tuna | +-------------------------+
| │ |
| ├── ◻ bedroom |
| │ └ src: #q=canvas |
| │ |
| └── ◻ livingroom |
| └ src: #q=canvas |
| |
+--------------------------------------------------------+
</code></pre>
<p>An XR Fragment-compatible browser viewing this scene lazy-loads and projects <code>painting.png</code> onto the (plane) object called <code>canvas</code> (which is copy-instanced in the bedroom and livingroom).
Also, after lazy-loading <code>rescue.com/aquarium.gltf</code>, only the queried objects <code>bassfish</code> and <code>tuna</code> will be instanced inside <code>aquariumcube</code>.
Also, after lazy-loading <code>ocean.com/aquarium.gltf</code>, only the queried objects <code>bass</code> and <code>tuna</code> will be instanced inside <code>aquariumcube</code>.
Resizing will happen according to its placeholder object (<code>aquariumcube</code>), see chapter Scaling.</p>
<h1 id="embedding-text">Embedding text</h1>
<h1 id="list-of-xr-uri-fragments">List of XR URI Fragments</h1>
<h1 id="security-considerations">Security Considerations</h1>

View File

@ -26,7 +26,6 @@ fullname="L.R. van Kammen"
body{
font-family: monospace;
max-width: 900px;
text-align: justify;
font-size: 15px;
padding: 0% 20%;
line-height: 30px;
@ -35,6 +34,22 @@ fullname="L.R. van Kammen"
}
h1 { margin-top:40px; }
pre{ line-height:18px; }
a,a:visited,a:active{ color: #70f; }
code{
border: 1px solid #AAA;
border-radius: 3px;
padding: 0px 5px 2px 5px;
}
pre>code{
border:none;
border-radius:0px;
padding:0;
}
blockquote{
padding-left: 30px;
margin: 0;
border-left: 5px solid #CCC;
}
</style>
@ -61,8 +76,6 @@ This draft offers a specification for 4D URLs & navigation, to link 3D scenes an
The specification promotes spatial addressibility, sharing, navigation, query-ing and interactive text across for (XR) Browsers.
XR Fragments allows us to enrich existing dataformats, by recursive use of existing technologies like [URI Fragments](https://en.wikipedia.org/wiki/URI_fragment) & [visual-meta](https://visual-meta.info).
{mainmatter}
# Introduction
How can we add more features to existing text & 3D scenes, without introducing new dataformats?
@ -71,7 +84,7 @@ However, thru the lens of authoring their lowest common denominator is still: pl
XR Fragments allows us to enrich existing dataformats, by recursive use of existing technologies:
* addressibility & navigation of 3D objects: [URI Fragments](https://en.wikipedia.org/wiki/URI_fragment) + (src/href) metadata
* addressibility & navigation of text objects: [visual-meta](https://visual-meta.info)
* bi-directional links between text and spatial objects: [visual-meta](https://visual-meta.info)
# Conventions and Definitions
@ -82,21 +95,43 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
* src: a (HTML-piggybacked) metadata-attribute of a 3D object which instances content
* href: a (HTML-piggybacked) metadata-attribute of a 3D object which links to content
* query: an URI Fragment-operator which queries object(s) from a scene (`#q=cube`)
* [visual-meta](https://visual.meta.info): metadata appended to text which is only indirectly visible/editable in XR.
{::boilerplate bcp14-tagged}
# List of URI Fragments
| fragment | type | example | info |
|--------------|----------|---------------|------------------------------------------------------|
| #pos | vector3 | #pos=0.5,0,0 | positions the camera at xyz-coordinate 0.5,0,0 |
| #rot | vector3 | #rot=0,90,0 | rotates the camera to xyz-rotation 0,90,0 |
| #t | vector2 | #t=500,1000 | sets the animation-loop range between frame 500 and 1000 |
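To illustrate how a viewer could act on these fragments, here is a hedged sketch in Haxe; the `applyFragment` helper and the camera/animation callbacks are assumptions for the example, not part of the spec:
```
// illustrative sketch: mapping parsed fragment values to viewer actions
class FragmentActions {
  public static function applyFragment(key:String, value:String,
      setCameraPosition:Float->Float->Float->Void,   // hypothetical engine hook
      setCameraRotation:Float->Float->Float->Void,   // hypothetical engine hook
      setAnimationRange:Float->Float->Void):Void {   // hypothetical engine hook
    var v = value.split(",").map(Std.parseFloat);
    switch (key) {
      case "pos" if (v.length == 3): setCameraPosition(v[0], v[1], v[2]); // #pos=0.5,0,0
      case "rot" if (v.length == 3): setCameraRotation(v[0], v[1], v[2]); // #rot=0,90,0
      case "t"   if (v.length == 2): setAnimationRange(v[0], v[1]);       // #t=500,1000
      default: trace('unhandled fragment: $key=$value');
    }
  }

  static function main() {
    applyFragment("pos", "0.5,0,0",
      (x, y, z) -> trace('camera.position = $x,$y,$z'),
      (x, y, z) -> trace('camera.rotation = $x,$y,$z'),
      (start, stop) -> trace('animation range = $start..$stop'));
  }
}
```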
# List of metadata for 3D nodes
| key | type | example | info |
|--------------|----------|-----------------|--------------------------------------------------------|
| name | string | name: "cube" | already available in all 3D fileformats & scenes |
| class | string | class: "cubes" | supported through custom property in 3D fileformats |
| href | string | href: "b.gltf" | supported through custom property in 3D fileformats |
| src | string | src: "#q=cube" | supported through custom property in 3D fileformats |
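As a sketch of where this metadata can live in practice: several authoring tools (e.g. Blender's glTF exporter) can write custom properties into a node's `extras` field, so a viewer could read the keys above roughly like below (illustrative Haxe, not the reference implementation):
```
// sketch: reading the metadata above from a glTF node, assuming the exporter
// wrote custom properties into the node's `extras` field
import haxe.Json;

class NodeMeta {
  static function main() {
    var gltf = Json.parse('{
      "nodes": [
        { "name": "buttonA", "extras": { "class": "buttons", "href": "#pos=1,0,1&t=100,200" } }
      ]
    }');
    var nodes:Array<Dynamic> = gltf.nodes;
    for (node in nodes) {
      var extras:Dynamic = Reflect.hasField(node, "extras") ? node.extras : {};
      trace("name:  " + node.name);                       // already part of every 3D fileformat
      trace("class: " + Reflect.field(extras, "class"));  // custom property
      trace("href:  " + Reflect.field(extras, "href"));   // custom property
      trace("src:   " + Reflect.field(extras, "src"));    // custom property (absent here)
    }
  }
}
```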
# Navigating 3D
Here's an ascii representation of a 3D scene-graph which contains 3D objects (`◻`) and their metadata:
```
index.gltf
├── ◻ buttonA
│ └ href: #pos=1,0,1&t=100,200
└── ◻ buttonB
└ href: other.fbx
+--------------------------------------------------------+
| |
| index.gltf |
| │ |
| ├── ◻ buttonA |
| │ └ href: #pos=1,0,1&t=100,200 |
| │ |
| └── ◻ buttonB |
| └ href: other.fbx |
| |
+--------------------------------------------------------+
```
@ -104,40 +139,184 @@ An XR Fragment-compatible browser viewing this scene, allows the end-user to int
In case of `buttonA` the end-user will be teleported to another location and time in the **current loaded scene**, but `buttonB` will
**replace the current scene** with a new one (`other.fbx`).
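A hedged sketch of how a browser might dispatch such an `href` in Haxe (the `teleportTo` and `loadScene` hooks are hypothetical engine callbacks, not part of the spec):
```
// sketch: fragment-only hrefs (buttonA) navigate inside the current scene,
// other URLs (buttonB) replace the scene; the two hooks are hypothetical
class HrefDispatch {
  public static function onClick(href:String, teleportTo:String->Void, loadScene:String->Void):Void {
    if (StringTools.startsWith(href, "#"))
      teleportTo(href);   // e.g. #pos=1,0,1&t=100,200 -> move within the currently loaded scene
    else
      loadScene(href);    // e.g. other.fbx -> replace the current scene
  }

  static function main() {
    onClick("#pos=1,0,1&t=100,200", f -> trace('teleport: $f'), u -> trace('load: $u'));
    onClick("other.fbx",            f -> trace('teleport: $f'), u -> trace('load: $u'));
  }
}
```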
# Navigating text
TODO
# Embedding 3D content
Here's an ascii representation of a 3D scene-graph with 3D objects (`◻`) which embeds remote & local 3D objects (`◻`) with or without using queries:
```
+------------------------------------------------------------+ +---------------------------+
| | | |
| index.gltf | | rescue.com/aquarium.gltf |
| │ | | │ |
| ├── ◻ canvas | | └── ◻ fishbowl |
| │ └ src: painting.png | | ├─ ◻ bassfish |
| │ | | └─ ◻ tuna |
| ├── ◻ aquariumcube | | |
| │ └ src: ://rescue.com/fish.gltf#q=bassfish%20tuna | +---------------------------+
| │ |
| ├── ◻ bedroom |
| │ └ src: #q=canvas |
| │ |
| └── ◻ livingroom |
| └ src: #q=canvas |
| |
+------------------------------------------------------------+
+--------------------------------------------------------+ +-------------------------+
| | | |
| index.gltf | | ocean.com/aquarium.fbx |
| │ | | │ |
| ├── ◻ canvas | | └── ◻ fishbowl |
| │ └ src: painting.png | | ├─ ◻ bass |
| │ | | └─ ◻ tuna |
| ├── ◻ aquariumcube | | |
| │ └ src: ://rescue.com/fish.gltf#q=bass%20tuna | +-------------------------+
| │ |
| ├── ◻ bedroom |
| │ └ src: #q=canvas |
| │ |
| └── ◻ livingroom |
| └ src: #q=canvas |
| |
+--------------------------------------------------------+
```
An XR Fragment-compatible browser viewing this scene lazy-loads and projects `painting.png` onto the (plane) object called `canvas` (which is copy-instanced in the bedroom and livingroom).
Also, after lazy-loading `rescue.com/aquarium.gltf`, only the queried objects `bassfish` and `tuna` will be instanced inside `aquariumcube`.
Also, after lazy-loading `ocean.com/aquarium.gltf`, only the queried objects `bass` and `tuna` will be instanced inside `aquariumcube`.
Resizing will happen according to its placeholder object (`aquariumcube`), see chapter Scaling.
# Embedding text
Text in XR has to be unobtrusive, for readers as well as authors.
We think and speak in simple text, and given the new paradigm of XR interfaces, logically (spoken) text must be enriched *afterwards* (lazy metadata).
Therefore, **yet-another-markuplanguage** is not going to get us very far.
What will get us far is when XR interfaces always guarantee direct feedbackloops between plain text and humans.
Humans need to be always served first, and machines later.
In the next chapter you can see how XR Fragments enjoys hasslefree rich text, by adding [visual-meta](https://visual.meta.info)(data) support to plain text.
## Default Data URI mimetype
The XR Fragment specification bumps the traditional default browser-mimetype
`text/plain;charset=US-ASCII`
to:
`text/plain;charset=utf-8;visual-meta=1`
This means that [visual-meta](https://visual.meta.info)(data) can be appended to plain text without being displayed.
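For illustration, a minimal Haxe sketch of how a renderer could keep such an appended block hidden: split the text on the `@{visual-meta-start}` marker used in the examples below, display the first part, and keep the remainder for interaction. The helper itself is an assumption, not part of the spec:
```
// sketch: strip an appended visual-meta block from plain text before rendering
class VisualMetaSplit {
  static inline var MARKER = "@{visual-meta-start}";

  public static function split(text:String):{display:String, meta:String} {
    var i = text.indexOf(MARKER);
    if (i == -1) return { display: text, meta: "" };
    return {
      display: StringTools.rtrim(text.substr(0, i)), // what the enduser sees
      meta:    text.substr(i)                        // kept for interaction & rich copy-paste
    };
  }

  static function main() {
    var parts = split("welcome human @{visual-meta-start}@{glossary-start}...");
    trace(parts.display); // welcome human
    trace(parts.meta);    // the hidden metadata block
  }
}
```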
### URL and Data URI
```
+--------------------------------------------------------------+ +------------------------+
| | | author.com/article.txt |
| index.gltf | +------------------------+
| │ | | |
| ├── ◻ article_canvas | | Hello friends. |
| │ └ src: ://author.com/article.txt | | |
| │ | | @{visual-meta-start} |
| └── ◻ note_canvas | | ... |
| └ src:`data:welcome human @{visual-meta-start}...` | +------------------------+
| |
| |
+--------------------------------------------------------------+
```
The enduser will only see `welcome human` rendered spatially.
The beauty is that text (AND visual-meta) in Data URI is saved into the scene, which also promotes rich copy-paste.
In both cases the text will get rendered immediately (onto a plane geometry, hence the name '_canvas').
The XR Fragment-compatible browser can let the enduser access visual-meta(data)-fields after interacting with the object (contextmenu e.g.).
> NOTE: this is not to say that XR Browsers should not load HTML/PDF/etc-URLs thru `src`, it is just that `text/plain;charset=utf-8;visual-meta=1` is the default.
The mapping between 3D objects and text (src-data) is simple:
Example:
```
+------------------------------------------------------------------------------------+
| |
| index.gltf |
| │ |
| ├── ◻ AI |
| │ └ class: tech |
| │ |
| └ src:`data:@{visual-meta-start} |
| @{glossary-start} |
| @entry{ |
| name="AI", |
| alt-name1 = "Artificial Intelligence", |
| description="Artificial intelligence", |
| url = "https://en.wikipedia.org/wiki/Artificial_intelligence", |
| } |
| @entry{ |
| name="tech" |
| alt-name1="technology" |
| description="when monkeys start to play with things" |
| }` |
+------------------------------------------------------------------------------------+
```
Attaching visualmeta as `src` metadata to the (root) scene-node hints the XR Fragment browser.
3D object names and classes map to `name` of visual-meta glossary-entries.
This allows rich interaction and interlinking between text and 3D objects:
1. When the user surfs to https://.../index.gltf#AI the XR Fragments-parser points the enduser to the AI object, and can show contextual info about it.
2. When (partial) remote content is embedded thru XR Fragment queries (see XR Fragment queries), its related visual-meta can be embedded along.
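A sketch of that name-to-glossary lookup in Haxe (the hardcoded glossary mirrors the example above; a real implementation would parse the `@entry{...}` blocks from the appended visual-meta):
```
// sketch: resolving a 3D object (or a #AI fragment) to its glossary entry by
// name or class; the glossary is hardcoded here for illustration
class GlossaryLookup {
  static var glossary = [
    "AI"   => { description: "Artificial intelligence",
                url: "https://en.wikipedia.org/wiki/Artificial_intelligence" },
    "tech" => { description: "when monkeys start to play with things", url: "" }
  ];

  // `name` and `klass` are taken from the 3D object's metadata
  public static function contextFor(name:String, klass:String) {
    if (glossary.exists(name))  return glossary.get(name);   // object name matches an entry
    if (glossary.exists(klass)) return glossary.get(klass);  // otherwise fall back to its class
    return null;
  }

  static function main() {
    trace(contextFor("AI", "tech"));    // entry for the AI object itself
    trace(contextFor("robot", "tech")); // falls back to the `tech` class entry
  }
}
```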
# HYPER copy/paste
The previous example offers something exciting compared to simple copy/paste of 3D objects or text.
XR Fragment allows HYPER-copy/paste: time, space and text interlinked.
Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share data in these ways:
* time/space: 3D object (current animation-loop)
* text: Text object (including visual-meta if any)
* interlinked: Collected objects by visual-meta tag
# XR Fragment queries
Include, exclude, hide/show objects using space-separated strings:
* `#q=cube`
* `#q=cube -ball_inside_cube`
* `#q=* -sky`
* `#q=-.language .english`
* `#q=cube&rot=0,90,0`
* `#q=price:>2 price:<5`
It's a simple but powerful syntax which allows <b>css</b>-like class/id-selectors with a searchengine prompt-style feeling:
1. queries are only executed when <b>embedded</b> in the asset/scene (thru `src`). This is to prevent sharing of scene-tampered URLs.
2. search words are matched against 3D object names or metadata-key(values)
3. `#` equals `#q=*`
4. words starting with `.` (`.language`) indicate class-properties
> **For example**: `#q=.foo` is a shorthand for `#q=class:foo`, which will select objects with custom property `class`:`foo`. A simple `#q=cube` will simply select an object named `cube`.
* see [an example video here](https://coderofsalvation.github.io/xrfragment.media/queries.mp4)
### including/excluding
|''operator'' | ''info'' |
|`*` | select all objects (only allowed in `src` custom property) in the <b>current</b> scene (<b>after</b> the default [[predefined_view|predefined_view]] `#` was executed)|
|`-` | removes/hides object(s) |
|`:` | indicates an object-embedded custom property key/value |
|`.` | alias for `class:` (`.foo` equals `class:foo`) |
|`>` `<`| compare float or int number|
|`/` | reference to root-scene.<br>Useful in case of (preventing) showing/hiding objects in nested scenes (instanced by [[src]])<br>`#q=-/cube` hides object `cube` only in the root-scene (not nested `cube` objects)<br> `#q=-cube` hides both object `cube` in the root-scene <b>AND</b> nested `cube` objects |
[» example implementation](https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/q.js)
[» example 3D asset](https://github.com/coderofsalvation/xrfragment/blob/main/example/assets/query.gltf#L192)
[» discussion](https://github.com/coderofsalvation/xrfragment/issues/3)
## Query Parser
Here's how to write a query parser:
1. create an associative array/object to store query-arguments as objects
1. detect object id's & properties `foo:1` and `foo` (reference regex: `/^.*:[><=!]?/` )
1. detect excluders like `-foo`,`-foo:1`,`-.foo`,`-/foo` (reference regex: `/^-/` )
1. detect root selectors like `/foo` (reference regex: `/^[-]?\//` )
1. detect class selectors like `.foo` (reference regex: `/^[-]?class$/` )
1. detect number values like `foo:1` (reference regex: `/^[0-9\.]+$/` )
1. expand aliases like `.foo` into `class:foo`
1. for every query token split string on `:`
1. create an empty array `rules`
1. then strip key-operator: convert "-foo" into "foo"
1. add operator and value to rule-array
1. therefore we set `id` to `true` or `false` (false=excluder `-`)
1. and we set `root` to `true` or `false` (true=`/` root selector is present)
1. we convert key '/foo' into 'foo'
1. finally we add the key/value to the store (`store.foo = {id:false,root:true}` e.g.)
> An example query-parser (which compiles to many languages) can be [found here](https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Query.hx)
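Below is a compact sketch of these steps in Haxe (illustrative only; the reference parser is the `Query.hx` linked above):
```
// illustrative sketch of the steps above (see Query.hx for the real parser)
typedef Rule = { id:Bool, root:Bool, value:String };

class QueryParser {
  public static function parse(query:String):Map<String, Rule> {
    var store    = new Map<String, Rule>();
    var excluder = ~/^-/;        // -foo, -foo:1, -.foo, -/foo
    var rootSel  = ~/^[-]?\//;   // /foo, -/foo
    for (token in query.split(" ")) {
      if (token == "") continue;
      var id   = !excluder.match(token);   // false = excluder `-`
      var root = rootSel.match(token);     // true  = `/` root selector present
      var t = token;
      while (t.length > 0 && (t.charAt(0) == "-" || t.charAt(0) == "/"))
        t = t.substr(1);                   // strip key-operators: "-/foo" -> "foo"
      if (StringTools.startsWith(t, "."))
        t = "class:" + t.substr(1);        // expand alias .foo -> class:foo
      var kv = t.split(":");               // split key(:value) on `:`
      store.set(kv[0], { id: id, root: root, value: kv.length > 1 ? kv[1] : "" });
    }
    return store;
  }

  static function main() {
    var rules = QueryParser.parse("cube -/skybox .english price:>2");
    for (key in rules.keys()) trace(key + " => " + Std.string(rules.get(key)));
  }
}
```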
# List of XR URI Fragments
# Security Considerations

View File

@ -3,9 +3,9 @@
Internet Engineering Task Force L.R. van Kammen
Internet-Draft 31 August 2023
Internet-Draft 1 September 2023
Intended status: Informational
Expires: 3 March 2024
Expires: 4 March 2024
XR Fragments
@ -37,7 +37,7 @@ Status of This Memo
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on 3 March 2024.
This Internet-Draft will expire on 4 March 2024.
Copyright Notice
@ -53,9 +53,9 @@ Copyright Notice
van Kammen Expires 3 March 2024 [Page 1]
van Kammen Expires 4 March 2024 [Page 1]
Internet-Draft XR Fragments August 2023
Internet-Draft XR Fragments September 2023
This document is subject to BCP 78 and the IETF Trust's Legal
@ -73,21 +73,25 @@ Table of Contents
2. Conventions and Definitions . . . . . . . . . . . . . . . . . 2
3. Navigating 3D . . . . . . . . . . . . . . . . . . . . . . . . 3
4. Navigating text . . . . . . . . . . . . . . . . . . . . . . . 3
5. Embedding 3D content . . . . . . . . . . . . . . . . . . . . 3
6. Embedding text . . . . . . . . . . . . . . . . . . . . . . . 4
7. List of XR URI Fragments . . . . . . . . . . . . . . . . . . 4
8. Security Considerations . . . . . . . . . . . . . . . . . . . 4
9. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 4
10. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 4
4.1. Default Data URI mimetype . . . . . . . . . . . . . . . . 4
4.1.1. URL and Data URI . . . . . . . . . . . . . . . . . . 4
4.2. omnidirectional XR annotations . . . . . . . . . . . . . 5
5. HYPER copy/paste . . . . . . . . . . . . . . . . . . . . . . 5
5.1. Plain Text (with optional visual-meta) . . . . . . . . . 6
6. Embedding 3D content . . . . . . . . . . . . . . . . . . . . 6
7. List of XR URI Fragments . . . . . . . . . . . . . . . . . . 7
8. Security Considerations . . . . . . . . . . . . . . . . . . . 7
9. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 7
10. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 7
1. Introduction
How can we add more features to existing text & 3D scenes, without
introducing new dataformats? Historically, there's many attempts to
create the ultimate markuplanguage or 3D fileformat. However, thru
the lens of authorina,g their lowest common denominator is still:
plain text. XR Fragments allows us to enrich existing dataformats,
by recursive use of existing technologies:
the lens of authoring, their lowest common denominator is still: plain
text. XR Fragments allows us to enrich existing dataformats, by
recursive use of existing technologies:
* addressibility & navigation of 3D objects: URI Fragments
(https://en.wikipedia.org/wiki/URI_fragment) + (src/href) metadata
@ -102,20 +106,22 @@ Table of Contents
* metadata: custom properties defined in 3D Scene or Object(nodes)
* XR fragment: URI Fragment with spatial hints (#pos=0,0,0&t=1,100
e.g.)
van Kammen Expires 4 March 2024 [Page 2]
Internet-Draft XR Fragments September 2023
* src: a (HTML-piggybacked) metadata-attribute of a 3D object which
instances content
* href: a (HTML-piggybacked) metadata-attribute of a 3D object which
links to content
van Kammen Expires 3 March 2024 [Page 2]
Internet-Draft XR Fragments August 2023
* query: an URI Fragment-operator which queries object(s) from a
scene (#q=cube)
* visual-meta (https://visual.meta.info): metadata appended to text
which is only indirectly visible/editable in XR.
{::boilerplate bcp14-tagged}
@ -124,13 +130,17 @@ Internet-Draft XR Fragments August 2023
Here's an ascii representation of a 3D scene-graph which contains 3D
objects (&#9723;) and their metadata:
index.gltf
├── ◻ buttonA
│ └ href: #pos=1,0,1&t=100,200
└── ◻ buttonB
└ href: other.fbx
+--------------------------------------------------------+
| |
| index.gltf |
| │ |
| ├── ◻ buttonA |
| │ └ href: #pos=1,0,1&t=100,200 |
| │ |
| └── ◻ buttonB |
| └ href: other.fbx |
| |
+--------------------------------------------------------+
An XR Fragment-compatible browser viewing this scene, allows the end-
user to interact with the buttonA and buttonB. In case of buttonA
@ -140,14 +150,180 @@ Internet-Draft XR Fragments August 2023
4. Navigating text
TODO
Text in XR has to be unobtrusive, for readers as well as authors. We
think and speak in simple text, and given the new paradigm of XR
interfaces, logically (spoken) text must be enriched _afterwards_
(lazy metadata). Therefore, XR Fragment-compliant text will just be
plain text, and *not yet-another-markuplanguage*. In contrast to
markup languages, this means humans need to be always served first,
and machines later.
5. Embedding 3D content
| Basically, a direct feedbackloop between unobtrusive text and
| human eye.
van Kammen Expires 4 March 2024 [Page 3]
Internet-Draft XR Fragments September 2023
Reality has shown that outsourcing rich textmanipulation to
commercial formats or mono-markup browsers (HTML) have their
usecases, but also introduce barriers to thought-translation (which
uses simple words).  As Marshall McLuhan said: we have become
irrevocably involved with, and responsible for, each other.
In order to enjoy hasslefree batteries-included programmable text
(glossaries, flexible views, drag-drop e.g.), XR Fragment supports
visual-meta (https://visual.meta.info)(data).
4.1. Default Data URI mimetype
The XR Fragment specification bumps the traditional default browser-
mimetype
text/plain;charset=US-ASCII
into:
text/plain;charset=utf-8;visual-meta=1
This means that visual-meta (https://visual.meta.info)(data) can be
appended to plain text without being displayed.
4.1.1. URL and Data URI
+--------------------------------------------------------------+ +------------------------+
| | | author.com/article.txt |
| index.gltf | +------------------------+
| │ | | |
| ├── ◻ article_canvas | | Hello friends. |
| │ └ src: ://author.com/article.txt | | |
| │ | | @{visual-meta-start} |
| └── ◻ note_canvas | | ... |
| └ src:`data:welcome human @{visual-meta-start}...` | +------------------------+
| |
| |
+--------------------------------------------------------------+
The difference is that text (+visual-meta data) in Data URI is saved
into the scene, which also promotes rich copy-paste. In both cases
the text will get rendered immediately (onto a plane geometry, hence
the name '_canvas'). The enduser can access visual-meta(data)-fields
only after interacting with the object.
| NOTE: this is not to say that XR Browsers should not load
| HTML/PDF/etc-URLs thru src-metadata, it is just that text/
| plain;charset=utf-8;visual-meta=1 is the minimum requirement.
van Kammen Expires 4 March 2024 [Page 4]
Internet-Draft XR Fragments September 2023
4.2. omnidirectional XR annotations
+---------------------------------------------------------------+
| |
| index.gltf |
| │ |
| ├── ◻ todo |
| │ └ src:`data:learn about ARC @{visual-meta-start}...`|
| │ |
| └── ◻ ARC |
| └── ◻ plane |
| └ src: `data:ARC was revolutionary |
| @{visual-meta-start} |
| @{glossary-start} |
| @entry{ |
| name = {ARC}, |
| description = {Engelbart Concept: |
| Augmentation Research Center, |
| The name of Doug's lab at SRI. |
| }, |
| }` |
| |
+---------------------------------------------------------------+
Here we can see a 3D object of ARC, to which the enduser added a
textnote (basically a plane geometry with src). The enduser can
view/edit visual-meta(data)-fields only after interacting with the
object. This allows the 3D scene to perform omnidirectional features
for free, by omni-connecting the word 'ARC':
* the ARC object can draw a line to the 'ARC was revolutionary'-note
* the 'ARC was revolutionary'-note can draw a line to the 'learn about
ARC'-note
* the 'learn about ARC'-note can draw a line to the ARC 3D object
5. HYPER copy/paste
The previous example offers something exciting compared to simple
textual copy-paste: XR Fragment offers 4D- and HYPER-copy/paste:
time, space and text interlinked. Therefore, the enduser in an XR
Fragment-compatible browser can copy/paste/share data in these ways:
* copy ARC 3D object (incl. animation) & paste elsewhere including
visual-meta(data)
* select the word ARC in any text, and paste a bundle of anything
ARC-related
van Kammen Expires 4 March 2024 [Page 5]
Internet-Draft XR Fragments September 2023
5.1. Plain Text (with optional visual-meta)
In contrast to markuplanguages, the (dictated/written) text needs no
parsing and stays intact, because metadata is postponed to the appendix.
This allows for a very economic XR way to:
* directly write, dictate, render text (=fast, without markup-
parser-overhead)
* add/load metadata later (if provided)
* enduser interactions with text (annotations,mutations) can be
reflected back into the visual-meta(data) Data URI
* copy/pasting of text will automatically cite the (mutated) source
* allows annotating 3D objects as if they were textual
representations (convert 3D document to text)
| NOTE: visualmeta never breaks the original intended text (in
| contrast to forgetting a html closing-tag e.g.)
6. Embedding 3D content
Here's an ascii representation of a 3D scene-graph with 3D objects
(&#9723;) which embeds remote & local 3D objects (&#9723;) with or
without using queries:
+--------------------------------------------------------+ +-------------------------+
| | | |
| index.gltf | | ocean.com/aquarium.fbx |
| │ | | │ |
| ├── ◻ canvas | | └── ◻ fishbowl |
| │ └ src: painting.png | | ├─ ◻ bass |
| │ | | └─ ◻ tuna |
| ├── ◻ aquariumcube | | |
| │ └ src: ://rescue.com/fish.gltf#q=bass%20tuna | +-------------------------+
| │ |
| ├── ◻ bedroom |
| │ └ src: #q=canvas |
| │ |
| └── ◻ livingroom |
| └ src: #q=canvas |
| |
+--------------------------------------------------------+
@ -157,47 +333,19 @@ Internet-Draft XR Fragments August 2023
van Kammen Expires 3 March 2024 [Page 3]
van Kammen Expires 4 March 2024 [Page 6]
Internet-Draft XR Fragments August 2023
Internet-Draft XR Fragments September 2023
+------------------------------------------------------------+ +---------------------------+
| | | |
| index.gltf | | rescue.com/aquarium.gltf |
| │ | | │ |
| ├── ◻ canvas | | └── ◻ fishbowl |
| │ └ src: painting.png | | ├─ ◻ bassfish |
| │ | | └─ ◻ tuna |
| ├── ◻ aquariumcube | | |
| │ └ src: ://rescue.com/fish.gltf#q=bassfish%20tuna | +---------------------------+
| │ |
| ├── ◻ bedroom |
| │ └ src: #q=canvas |
| │ |
| └── ◻ livingroom |
| └ src: #q=canvas |
| |
+------------------------------------------------------------+
An XR Fragment-compatible browser viewing this scene lazy-loads and
projects painting.png onto the (plane) object called canvas (which is
copy-instanced in the bedroom and livingroom).  Also, after lazy-loading
rescue.com/aquarium.gltf, only the queried objects bassfish and tuna
will be instanced inside aquariumcube. Resizing will be happen
ocean.com/aquarium.gltf, only the queried objects bass and tuna will
be instanced inside aquariumcube.  Resizing will happen
according to its placeholder object (aquariumcube), see chapter
Scaling.
6. Embedding text
7. List of XR URI Fragments
8. Security Considerations
@ -221,4 +369,24 @@ Internet-Draft XR Fragments August 2023
van Kammen Expires 3 March 2024 [Page 4]
van Kammen Expires 4 March 2024 [Page 7]

View File

@ -15,19 +15,15 @@ The specification promotes spatial addressibility, sharing, navigation, query-in
XR Fragments allows us to enrich existing dataformats, by recursive use of existing technologies like <eref target="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</eref> &amp; <eref target="https://visual-meta.info">visual-meta</eref>.</t>
</abstract>
</front>
<middle>
<section anchor="introduction"><name>Introduction</name>
<t>How can we add more features to existing text &amp; 3D scenes, without introducing new dataformats?
Historically, there's many attempts to create the ultimate markuplanguage or 3D fileformat.
However, thru the lens of authorina,g their lowest common denominator is still: plain text.
However, thru the lens of authoring, their lowest common denominator is still: plain text.
XR Fragments allows us to enrich existing dataformats, by recursive use of existing technologies:</t>
<ul spacing="compact">
<li>addressibility &amp; navigation of 3D objects: <eref target="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</eref> + (src/href) metadata</li>
<li>addressibility &amp; navigation of text objects: <eref target="https://visual-meta.info">visual-meta</eref></li>
<li>bi-directional links between text and spatial objects: <eref target="https://visual-meta.info">visual-meta</eref></li>
</ul>
</section>
@ -41,6 +37,7 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
<li>src: a (HTML-piggybacked) metadata-attribute of a 3D object which instances content</li>
<li>href: a (HTML-piggybacked) metadata-attribute of a 3D object which links to content</li>
<li>query: an URI Fragment-operator which queries object(s) from a scene (<tt>#q=cube</tt>)</li>
<li><eref target="https://visual.meta.info">visual-meta</eref>: metadata appended to text which is only indirectly visible/editable in XR.</li>
</ul>
<t>{::boilerplate bcp14-tagged}</t>
</section>
@ -48,13 +45,17 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
<section anchor="navigating-3d"><name>Navigating 3D</name>
<t>Here's an ascii representation of a 3D scene-graph which contains 3D objects (<tt></tt>) and their metadata:</t>
<artwork> index.gltf
├── ◻ buttonA
│ └ href: #pos=1,0,1&amp;t=100,200
└── ◻ buttonB
└ href: other.fbx
<artwork> +--------------------------------------------------------+
| |
| index.gltf |
| │ |
| ├── ◻ buttonA |
| │ └ href: #pos=1,0,1&amp;t=100,200 |
| │ |
| └── ◻ buttonB |
| └ href: other.fbx |
| |
+--------------------------------------------------------+
</artwork>
<t>An XR Fragment-compatible browser viewing this scene, allows the end-user to interact with the <tt>buttonA</tt> and <tt>buttonB</tt>.
@ -62,37 +63,180 @@ In case of <tt>buttonA</tt> the end-user will be teleported to another location
<strong>replace the current scene</strong> with a new one (<tt>other.fbx</tt>).</t>
</section>
<section anchor="navigating-text"><name>Navigating text</name>
<t>TODO</t>
</section>
<section anchor="embedding-3d-content"><name>Embedding 3D content</name>
<t>Here's an ascii representation of a 3D scene-graph with 3D objects (<tt></tt>) which embeds remote &amp; local 3D objects (<tt></tt>) (without) using queries:</t>
<artwork> +------------------------------------------------------------+ +---------------------------+
| | | |
| index.gltf | | rescue.com/aquarium.gltf |
| │ | | │ |
| ├── ◻ canvas | | └── ◻ fishbowl |
| │ └ src: painting.png | | ├─ ◻ bassfish |
| │ | | └─ ◻ tuna |
| ├── ◻ aquariumcube | | |
| │ └ src: ://rescue.com/fish.gltf#q=bassfish%20tuna | +---------------------------+
| │ |
| ├── ◻ bedroom |
| │ └ src: #q=canvas |
| │ |
| └── ◻ livingroom |
| └ src: #q=canvas |
| |
+------------------------------------------------------------+
<artwork> +--------------------------------------------------------+ +-------------------------+
| | | |
| index.gltf | | ocean.com/aquarium.fbx |
| │ | | │ |
| ├── ◻ canvas | | └── ◻ fishbowl |
| │ └ src: painting.png | | ├─ ◻ bass |
| │ | | └─ ◻ tuna |
| ├── ◻ aquariumcube | | |
| │ └ src: ://rescue.com/fish.gltf#q=bass%20tuna | +-------------------------+
| │ |
| ├── ◻ bedroom |
| │ └ src: #q=canvas |
| │ |
| └── ◻ livingroom |
| └ src: #q=canvas |
| |
+--------------------------------------------------------+
</artwork>
<t>An XR Fragment-compatible browser viewing this scene lazy-loads and projects <tt>painting.png</tt> onto the (plane) object called <tt>canvas</tt> (which is copy-instanced in the bedroom and livingroom).
Also, after lazy-loading <tt>rescue.com/aquarium.gltf</tt>, only the queried objects <tt>bassfish</tt> and <tt>tuna</tt> will be instanced inside <tt>aquariumcube</tt>.
Also, after lazy-loading <tt>ocean.com/aquarium.gltf</tt>, only the queried objects <tt>bass</tt> and <tt>tuna</tt> will be instanced inside <tt>aquariumcube</tt>.
Resizing will happen according to its placeholder object (<tt>aquariumcube</tt>), see chapter Scaling.</t>
</section>
<section anchor="embedding-text"><name>Embedding text</name>
<t>Text in XR has to be unobtrusive, for readers as well as authors.
We think and speak in simple text, and given the new paradigm of XR interfaces, logically (spoken) text must be enriched <em>afterwards</em> (lazy metadata).
Therefore, XR Fragment-compliant text will just be plain text, and <strong>not yet-another-markuplanguage</strong>.
In contrast to markup languages, this means humans need to be always served first, and machines later.</t>
<blockquote><t>Basically, XR interfaces work best when direct feedbackloops between unobtrusive text and humans are guaranteed.</t>
</blockquote><t>In the next chapter you can see how XR Fragments enjoys hasslefree rich text, by supporting <eref target="https://visual.meta.info">visual-meta</eref>(data).</t>
<section anchor="default-data-uri-mimetype"><name>Default Data URI mimetype</name>
<t>The XR Fragment specification bumps the traditional default browser-mimetype</t>
<t><tt>text/plain;charset=US-ASCII</tt></t>
<t>to:</t>
<t><tt>text/plain;charset=utf-8;visual-meta=1</tt></t>
<t>This means that <eref target="https://visual.meta.info">visual-meta</eref>(data) can be appended to plain text without being displayed.</t>
<section anchor="url-and-data-uri"><name>URL and Data URI</name>
<artwork> +--------------------------------------------------------------+ +------------------------+
| | | author.com/article.txt |
| index.gltf | +------------------------+
| │ | | |
| ├── ◻ article_canvas | | Hello friends. |
| │ └ src: ://author.com/article.txt | | |
| │ | | @{visual-meta-start} |
| └── ◻ note_canvas | | ... |
| └ src:`data:welcome human @{visual-meta-start}...` | +------------------------+
| |
| |
+--------------------------------------------------------------+
</artwork>
<t>The enduser will only see <tt>welcome human</tt> rendered spatially.
The beauty is that text (AND visual-meta) in Data URI is saved into the scene, which also promotes rich copy-paste.
In both cases the text will get rendered immediately (onto a plane geometry, hence the name '_canvas').
The XR Fragment-compatible browser can let the enduser access visual-meta(data)-fields after interacting with the object (contextmenu e.g.).</t>
<blockquote><t>NOTE: this is not to say that XR Browsers should not load HTML/PDF/etc-URLs thru <tt>src</tt>, it is just that <tt>text/plain;charset=utf-8;visual-meta=1</tt> is the default.</t>
</blockquote><t>The mapping between 3D objects and text (src-data) is simple:</t>
<t>Example:</t>
<artwork> +------------------------------------------------------------------------------------+
| |
| index.gltf |
| │ |
| ├── ◻ AI |
| │ └ class: tech |
| │ |
| └ src:`data:@{visual-meta-start} |
| @{glossary-start} |
| @entry{ |
| name=&quot;AI&quot;, |
| alt-name1 = &quot;Artificial Intelligence&quot;, |
| description=&quot;Artificial intelligence&quot;, |
| url = &quot;https://en.wikipedia.org/wiki/Artificial_intelligence&quot;, |
| } |
| @entry{ |
| name=&quot;tech&quot; |
| alt-name1=&quot;technology&quot; |
| description=&quot;when monkeys start to play with things&quot; |
| }` |
+------------------------------------------------------------------------------------+
</artwork>
<t>Attaching visualmeta as <tt>src</tt> metadata to the (root) scene-node hints the XR Fragment browser.
3D object names and classes map to <tt>name</tt> of visual-meta glossary-entries.
This allows rich interaction and interlinking between text and 3D objects:</t>
<ol spacing="compact">
<li>When the user surfs to https://.../index.gltf#AI the XR Fragments-parser points the enduser to the AI object, and can show contextual info about it.</li>
<li>When (partial) remote content is embedded thru XR Fragment queries (see XR Fragment queries), its related visual-meta can be embedded along.</li>
</ol>
</section>
</section>
</section>
<section anchor="hyper-copy-paste"><name>HYPER copy/paste</name>
<t>The previous example offers something exciting compared to simple copy/paste of 3D objects or text.
XR Fragment allows HYPER-copy/paste: time, space and text interlinked.
Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share data in these ways:</t>
<ul spacing="compact">
<li>time/space: 3D object (current animation-loop)</li>
<li>text: Text object (including visual-meta if any)</li>
<li>interlinked: Collected objects by visual-meta tag</li>
</ul>
</section>
<section anchor="xr-fragment-queries"><name>XR Fragment queries</name>
<t>Include, exclude, hide/show objects using space-separated strings:</t>
<ul spacing="compact">
<li><tt>#q=cube</tt></li>
<li><tt>#q=cube -ball_inside_cube</tt></li>
<li><tt>#q=* -sky</tt></li>
<li><tt>#q=-.language .english</tt></li>
<li><tt>#q=cube&amp;rot=0,90,0</tt></li>
<li><tt>#q=price:&gt;2 price:&lt;5</tt></li>
</ul>
<t>It's a simple but powerful syntax which allows &lt;b&gt;css&lt;/b&gt;-like class/id-selectors with a searchengine prompt-style feeling:</t>
<ol spacing="compact">
<li>queries are only executed when &lt;b&gt;embedded&lt;/b&gt; in the asset/scene (thru <tt>src</tt>). This is to prevent sharing of scene-tampered URLs.</li>
<li>search words are matched against 3D object names or metadata-key(values)</li>
<li><tt>#</tt> equals <tt>#q=*</tt></li>
<li>words starting with <tt>.</tt> (<tt>.language</tt>) indicate class-properties</li>
</ol>
<blockquote><t><strong>For example</strong>: <tt>#q=.foo</tt> is a shorthand for <tt>#q=class:foo</tt>, which will select objects with custom property <tt>class</tt>:<tt>foo</tt>. A simple <tt>#q=cube</tt> will simply select an object named <tt>cube</tt>.</t>
</blockquote>
<ul spacing="compact">
<li>see <eref target="https://coderofsalvation.github.io/xrfragment.media/queries.mp4">an example video here</eref></li>
</ul>
<section anchor="including-excluding"><name>including/excluding</name>
<t>|''operator'' | ''info'' |
|<tt>*</tt> | select all objects (only allowed in <tt>src</tt> custom property) in the &lt;b&gt;current&lt;/b&gt; scene (&lt;b&gt;after&lt;/b&gt; the default [[predefined_view|predefined_view]] <tt>#</tt> was executed)|
|<tt>-</tt> | removes/hides object(s) |
|<tt>:</tt> | indicates an object-embedded custom property key/value |
|<tt>.</tt> | alias for <tt>class:</tt> (<tt>.foo</tt> equals <tt>class:foo</tt>) |
|<tt>&gt;</tt> <tt>&lt;</tt>| compare float or int number|
|<tt>/</tt> | reference to root-scene.<br />
Useful in case of (preventing) showing/hiding objects in nested scenes (instanced by [[src]])<br />
<tt>#q=-/cube</tt> hides object <tt>cube</tt> only in the root-scene (not nested <tt>cube</tt> objects)<br />
<tt>#q=-cube</tt> hides both object <tt>cube</tt> in the root-scene &lt;b&gt;AND&lt;/b&gt; nested <tt>cube</tt> objects |</t>
<t><eref target="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/q.js">» example implementation</eref>
<eref target="https://github.com/coderofsalvation/xrfragment/blob/main/example/assets/query.gltf#L192">» example 3D asset</eref>
<eref target="https://github.com/coderofsalvation/xrfragment/issues/3">» discussion</eref></t>
</section>
</section>
<section anchor="query-parser"><name>Query Parser</name>
<t>Here's how to write a query parser:</t>
<ol spacing="compact">
<li>create an associative array/object to store query-arguments as objects</li>
<li>detect object id's &amp; properties <tt>foo:1</tt> and <tt>foo</tt> (reference regex: <tt>/^.*:[&gt;&lt;=!]?/</tt> )</li>
<li>detect excluders like <tt>-foo</tt>,<tt>-foo:1</tt>,<tt>-.foo</tt>,<tt>-/foo</tt> (reference regex: <tt>/^-/</tt> )</li>
<li>detect root selectors like <tt>/foo</tt> (reference regex: <tt>/^[-]?\//</tt> )</li>
<li>detect class selectors like <tt>.foo</tt> (reference regex: <tt>/^[-]?class$/</tt> )</li>
<li>detect number values like <tt>foo:1</tt> (reference regex: <tt>/^[0-9\.]+$/</tt> )</li>
<li>expand aliases like <tt>.foo</tt> into <tt>class:foo</tt></li>
<li>for every query token split string on <tt>:</tt></li>
<li>create an empty array <tt>rules</tt></li>
<li>then strip key-operator: convert &quot;-foo&quot; into &quot;foo&quot;</li>
<li>add operator and value to rule-array</li>
<li>therefore we set <tt>id</tt> to <tt>true</tt> or <tt>false</tt> (false=excluder <tt>-</tt>)</li>
<li>and we set <tt>root</tt> to <tt>true</tt> or <tt>false</tt> (true=<tt>/</tt> root selector is present)</li>
<li>we convert key '/foo' into 'foo'</li>
<li>finally we add the key/value to the store (<tt>store.foo = {id:false,root:true}</tt> e.g.)</li>
</ol>
<blockquote><t>An example query-parser (which compiles to many languages) can be <eref target="https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Query.hx">found here</eref></t>
</blockquote></section>
</section>
<section anchor="list-of-xr-uri-fragments"><name>List of XR URI Fragments</name>
@ -110,6 +254,6 @@ Resizing will be happen accordingly to its placeholder object (<tt>aquariumcube<
<t>TODO acknowledge.</t>
</section>
</middle>
</front>
</rfc>

View File

@ -3,5 +3,5 @@ set -e
mmark RFC_XR_Fragments.md > RFC_XR_Fragments.xml
xml2rfc --v3 RFC_XR_Fragments.xml # RFC_XR_Fragments.txt
mmark --html RFC.template.md | grep -vE '(<!--{|}-->)' > RFC_XR_Fragments.html
mmark --html RFC_XR_Fragments.md | grep -vE '(<!--{|}-->)' > RFC_XR_Fragments.html
#sed 's|visual-meta|<a href="https://visual-meta.org">visual-meta</a>|g' -i RFC_XR_Fragments.html