Internet Engineering Task Force L.R. van Kammen
Internet-Draft 9 September 2023
Intended status: Informational
XR Fragments
draft-XRFRAGMENTS-leonvankammen-00
Abstract
This draft offers a specification for 4D URLs & navigation, to link
3D scenes and text together with or without a network-connection.
The specification promotes spatial addressability, sharing,
navigation, querying and tagging of interactive (text)objects across
(XR) browsers.
XR Fragments allows us to enrich existing dataformats, by recursive
use of existing proven technologies like URI Fragments
(https://en.wikipedia.org/wiki/URI_fragment) and BibTags notation.
Almost every idea in this document is demonstrated at
https://xrfragment.org (https://xrfragment.org)
Status of This Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on 12 March 2024.
Copyright Notice
Copyright (c) 2023 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents (https://trustee.ietf.org/
license-info) in effect on the date of publication of this document.
Please review these documents carefully, as they describe your rights
and restrictions with respect to this document. Code Components
extracted from this document must include Revised BSD License text as
described in Section 4.e of the Trust Legal Provisions and are
provided without warranty as described in the Revised BSD License.
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2
2. Core principle . . . . . . . . . . . . . . . . . . . . . . . 3
3. Conventions and Definitions . . . . . . . . . . . . . . . . . 3
4. List of URI Fragments . . . . . . . . . . . . . . . . . . . . 3
5. List of metadata for 3D nodes . . . . . . . . . . . . . . . . 4
6. Navigating 3D . . . . . . . . . . . . . . . . . . . . . . . . 4
7. Embedding 3D content . . . . . . . . . . . . . . . . . . . . 5
8. XR Fragment queries . . . . . . . . . . . . . . . . . . . . . 5
8.1. including/excluding . . . . . . . . . . . . . . . . . . . 6
8.2. Query Parser . . . . . . . . . . . . . . . . . . . . . . 7
8.3. XR Fragment URI Grammar . . . . . . . . . . . . . . . . . 7
9. Text in XR (tagging,linking to spatial objects) . . . . . . . 8
9.1. Default Data URI mimetype . . . . . . . . . . . . . . . . 11
9.2. URL and Data URI . . . . . . . . . . . . . . . . . . . . 12
9.3. Bibs & BibTeX: lowest common denominator for linking
data . . . . . . . . . . . . . . . . . . . . . . . . . . 13
9.4. XR Text example parser . . . . . . . . . . . . . . . . . 15
10. HYPER copy/paste . . . . . . . . . . . . . . . . . . . . . . 17
11. Security Considerations . . . . . . . . . . . . . . . . . . . 17
12. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 17
13. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 17
14. Appendix: Definitions . . . . . . . . . . . . . . . . . . . . 18
1. Introduction
How can we add more features to existing text & 3D scenes, without
introducing new dataformats?
Historically, there have been many attempts to create the ultimate
markup language or 3D fileformat.
Their lowest common denominator is: (co)authoring using plain text.
XR Fragments allows us to enrich/connect existing dataformats, by
recursive use of existing technologies:
1. addressability and navigation of 3D scenes/objects: URI Fragments
   (https://en.wikipedia.org/wiki/URI_fragment) + src/href spatial
   metadata
2. hassle-free tagging across text and spatial objects using bibs
   (https://github.com/coderofsalvation/tagbibs) / BibTags
   (https://en.wikipedia.org/wiki/BibTeX) appendices (see visual-
   meta (https://visual-meta.info) e.g.)
| NOTE: The chapters in this document are ordered from high-level to
| low-level (technical) as much as possible
2. Core principle
XR Fragments strives to serve (nontechnical/fuzzy) humans first, and
machine(implementations) later, by ensuring hassle-free text-vs-
thought feedback loops.
This also means that the repair-ability of machine-matters should be
human-friendly too (not too complex).
| "When a car breaks down, the ones *without* turbosupercharger are
| easier to fix"
Let's always focus on average humans: our fuzzy symbolic mind must
be served first, before serving a greater categorized typesafe RDF
hive mind (https://en.wikipedia.org/wiki/Borg).
| Humans first, machines (AI) later.
Therefore, XR Fragments does not look at XR (or the web) through the
lens of HTML.
XR Fragments itself is HTML-agnostic, though pseudo-XR Fragment
browsers *can* be implemented on top of HTML/Javascript.
3. Conventions and Definitions
See appendix below in case certain terms are not clear.
4. List of URI Fragments
+==========+=========+==============+============================+
| fragment | type | example | info |
+==========+=========+==============+============================+
| #pos | vector3 | #pos=0.5,0,0 | positions camera to xyz- |
| | | | coord 0.5,0,0 |
+----------+---------+--------------+----------------------------+
| #rot | vector3 | #rot=0,90,0 | rotates camera to xyz- |
| | | | rotation 0,90,0 |
+----------+---------+--------------+----------------------------+
| #t | vector2 | #t=500,1000 | sets animation-loop range |
| | | | between frame 500 and 1000 |
+----------+---------+--------------+----------------------------+
| #...... | string | #.cubes | object(s) of interest |
| | | #cube | (fragment to object name |
| | | | or class mapping) |
+----------+---------+--------------+----------------------------+
Table 1
| xyz coordinates are similar to ones found in SVG Media Fragments
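Below is a minimal, non-normative sketch (in javascript, like the
example parser in chapter 9.4) of how such URI Fragments could be
split into key/value pairs.  The function name parseFragment and the
conversion of vector values into arrays are illustrative assumptions,
not part of this spec.

parseFragment = (url) => {
  let hash = url.split("#")[1] || ''
  let frag = {}
  hash.split("&").map( (kv) => {
    if( !kv ) return
    let [key, value] = kv.split("=")
    // vector2/vector3 values like 0.5,0,0 become arrays of floats
    frag[key] = value && value.match(/,/)
                ? value.split(",").map(parseFloat)
                : value
  })
  return frag
}

// parseFragment("my3d.gltf#pos=0.5,0,0&t=500,1000")
// -> { pos: [0.5,0,0], t: [500,1000] }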
5. List of metadata for 3D nodes
+=======+========+================+============================+
| key | type | example (JSON) | info |
+=======+========+================+============================+
| name | string | "name": "cube" | available in all 3D |
| | | | fileformats & scenes |
+-------+--------+----------------+----------------------------+
| class | string | "class": | available through custom |
| | | "cubes" | property in 3D fileformats |
+-------+--------+----------------+----------------------------+
| href | string | "href": | available through custom |
| | | "b.gltf" | property in 3D fileformats |
+-------+--------+----------------+----------------------------+
| src | string | "src": | available through custom |
| | | "#q=cube" | property in 3D fileformats |
+-------+--------+----------------+----------------------------+
Table 2
Popular compatible 3D fileformats: .gltf, .obj, .fbx, .usdz, .json
(THREE.js), .dae and so on.
| NOTE: XR Fragments are file-agnostic, which means that the
| metadata exists in programmatic 3D scene(nodes) too.
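As a non-normative illustration, the metadata of Table 2 could appear
as custom properties on a scene node like this (shown here as the
"extras" field of a glTF node; the values are just examples):

node = {
  "name": "aquariumcube",      // available in all 3D fileformats & scenes
  "extras": {                  // glTF's container for custom properties
    "class": "cubes",
    "href":  "b.gltf",
    "src":   "#q=cube"
  }
}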
6. Navigating 3D
Here's an ascii representation of a 3D scene-graph which contains 3D
objects ◻ and their metadata:
+--------------------------------------------------------+
| |
| index.gltf |
| │ |
| ├── ◻ buttonA |
| │ └ href: #pos=1,0,1&t=100,200 |
| │ |
| └── ◻ buttonB |
| └ href: other.fbx | <-- file-agnostic (can be .gltf .obj etc)
| |
+--------------------------------------------------------+
An XR Fragment-compatible browser viewing this scene allows the end-
user to interact with buttonA and buttonB.
In case of buttonA the end-user will be teleported to another
location and time in the *currently loaded scene*, whereas buttonB
will *replace the current scene* with a new one, like other.fbx.
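A non-normative sketch of this behaviour (loadScene and applyFragment
are hypothetical helpers, and parseFragment refers to the sketch in
chapter 4):

onHref = (href, currentScene) => {
  let [file, hash] = href.split("#")
  if( !file ){
    // buttonA-style: teleport within the currently loaded scene
    applyFragment( parseFragment(href), currentScene )
  }else{
    // buttonB-style: replace the current scene (file-agnostic: .gltf, .obj, .fbx ...)
    loadScene(file).then( (scene) => {
      if( hash ) applyFragment( parseFragment("#"+hash), scene )
    })
  }
}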
7. Embedding 3D content
Here's an ascii representation of a 3D scene-graph with 3D objects
◻ which embeds remote & local 3D objects ◻ with or without using
queries:
+--------------------------------------------------------+ +-------------------------+
| | | |
| index.gltf | | ocean.com/aquarium.fbx |
| │ | | │ |
| ├── ◻ canvas | | └── ◻ fishbowl |
| │ └ src: painting.png | | ├─ ◻ bass |
| │ | | └─ ◻ tuna |
| ├── ◻ aquariumcube | | |
| │ └ src: ://rescue.com/fish.gltf#q=bass%20tuna | +-------------------------+
| │ |
| ├── ◻ bedroom |
| │ └ src: #q=canvas |
| │ |
| └── ◻ livingroom |
| └ src: #q=canvas |
| |
+--------------------------------------------------------+
An XR Fragment-compatible browser viewing this scene lazy-loads and
projects painting.png onto the (plane) object called canvas (which is
copy-instanced in the bedroom and livingroom).
Also, after lazy-loading ocean.com/aquarium.fbx, only the queried
objects bass and tuna will be instanced inside aquariumcube.
Resizing will happen according to its placeholder object
aquariumcube, see chapter Scaling.
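A non-normative sketch of how a browser could resolve such src values
(loadAsset, instanceInto and matchQuery are hypothetical helpers;
query matching itself is described in chapter 8):

resolveSrc = (src, placeholder, currentScene) => {
  if( src.startsWith("#") ){
    // local src: instance objects from the current scene (e.g. #q=canvas)
    instanceInto( placeholder, matchQuery(currentScene, src) )
  }else{
    // remote src: lazy-load the external asset, then apply its fragment (if any)
    let [url, hash] = src.split("#")
    loadAsset(url).then( (asset) => {
      // resizing happens according to the placeholder, see chapter Scaling
      instanceInto( placeholder, hash ? matchQuery(asset, "#"+hash) : asset )
    })
  }
}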
8. XR Fragment queries
Include, exclude, hide or show objects using space-separated strings:
* #q=cube
* #q=cube -ball_inside_cube
* #q=* -sky
* #q=-.language .english
* #q=cube&rot=0,90,0
* #q=price:>2 price:<5
It's a simple but powerful syntax which allows CSS-like class/
id-selectors with a searchengine prompt-style feeling:
1. queries are showing/hiding objects *only* when defined as src
   value (prevents sharing of scene-tampered URLs).
2. queries are highlighting objects when defined in the top-level
   (browser) URL (bar).
3. search words like cube and foo in #q=cube foo are matched against
3D object names or custom metadata-key(values)
4. search words like cube and foo in #q=cube foo are matched against
tags (BibTeX) inside plaintext src values like @cube{redcube, ...
e.g.
5. # equals #q=*
6. words starting with . like .german match class-metadata of 3D
objects like "class":"german"
7. words starting with . like .german match class-metadata of
(BibTeX) tags in XR Text objects like @german{KarlHeinz, ... e.g.
| *For example*: #q=.foo is a shorthand for #q=class:foo, which will
| select objects with custom property class:foo. Just a simple
| #q=cube will simply select an object named cube.
* see an example video here
(https://coderofsalvation.github.io/xrfragment.media/queries.mp4)
8.1. including/excluding
+==========+=================================================+
| operator | info |
+==========+=================================================+
| * | select all objects (only useful in src custom |
| | property) |
+----------+-------------------------------------------------+
| - | removes/hides object(s) |
+----------+-------------------------------------------------+
| : | indicates an object-embedded custom property |
| | key/value |
+----------+-------------------------------------------------+
| . | alias for "class": .foo equals class:foo |
+----------+-------------------------------------------------+
| > < | compare float or int number |
+----------+-------------------------------------------------+
| / | reference to root-scene. |
| | Useful in case of (preventing) showing/hiding |
| | objects in nested scenes (instanced by src) (*) |
+----------+-------------------------------------------------+
Table 3
| * = #q=-/cube hides object cube only in the root-scene (not nested
|     cube objects)
|     #q=-cube hides both object cube in the root-scene AND
|     nested cube objects
» example implementation
(https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/
three/xrf/q.js) » example 3D asset
(https://github.com/coderofsalvation/xrfragment/blob/main/example/
assets/query.gltf#L192) » discussion
(https://github.com/coderofsalvation/xrfragment/issues/3)
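A non-normative sketch of applying parsed query-rules to a scene
(assuming a THREE.js-like scene with .traverse(), .name, .userData
and .visible, and a parseQuery() like the sketch in the next section;
the > < comparison operators and the / root-selector are omitted for
brevity):

applyQuery = (scene, rules) => {
  scene.traverse( (obj) => {
    for( let token in rules ){
      let rule  = rules[token]
      let value = rule.key == 'name'  ? obj.name
                : rule.key == 'class' ? (obj.userData.class || '')
                : obj.userData[ rule.key ]
      let match = token == '*' ? true
                : rule.key == 'class'
                  ? String(value).split(' ').includes( String(rule.value) )
                  : value == rule.value
      if( match ) obj.visible = rule.id   // excluders (-foo) hide, others show
    }
  })
}

// applyQuery( scene, parseQuery("* -sky") )   // show everything except 'sky'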
8.2. Query Parser
Here's how to write a query parser:
1. create an associative array/object to store query-arguments as
objects
2. detect object id's & properties foo:1 and foo (reference regex:
/^.*:[><=!]?/ )
3. detect excluders like -foo,-foo:1,-.foo,-/foo (reference regex:
/^-/ )
4. detect root selectors like /foo (reference regex: /^[-]?\// )
5. detect class selectors like .foo (reference regex: /^[-]?class$/
)
6. detect number values like foo:1 (reference regex: /^[0-9\.]+$/ )
7. expand aliases like .foo into class:foo
8. for every query token split string on :
9. create an empty array rules
10. then strip key-operator: convert "-foo" into "foo"
11. add operator and value to rule-array
12. we set id to true or false (false=excluder -)
13. and we set root to true or false (true=/ root selector is
present)
14. we convert key '/foo' into 'foo'
15. finally we add the key/value to the store like store.foo =
{id:false,root:true} e.g.
| An example query-parser (which compiles to many languages) can be
| found here
| (https://github.com/coderofsalvation/xrfragment/blob/main/src/
| xrfragment/Query.hx)
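A minimal non-normative sketch (in javascript) following the steps
above; the exact shape of the store differs slightly from the step-15
example (rules are keyed by their full token here), and operators
like > < are stored as-is:

parseQuery = (query) => {
  let store = {}
  query.split(" ").map( (token) => {
    if( !token ) return
    let rule = { id: true, root: false }
    if( token.match(/^-/) )      rule.id   = false  // excluder: hide instead of show
    if( token.match(/^[-]?\//) ) rule.root = true   // '/' : match in root-scene only
    token = token.replace(/^[-\/]+/, '')            // strip '-' and '/' operators
    if( token.match(/^\./) ) token = 'class:' + token.substr(1)  // expand .foo alias
    let [key, value] = token.split(":")
    rule.key   = value !== undefined ? key   : 'name'
    rule.value = value !== undefined ? value : key
    if( String(rule.value).match(/^[0-9\.]+$/) ) rule.value = parseFloat(rule.value)
    store[token] = rule
  })
  return store
}

// parseQuery("cube -ball_inside_cube .english price:>2")
// -> { 'cube':{..}, 'ball_inside_cube':{id:false,..}, 'class:english':{..}, 'price:>2':{..} }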
8.3. XR Fragment URI Grammar
reserved = gen-delims / sub-delims
gen-delims = "#" / "&"
sub-delims = "," / "="
| Example: ://foo.com/my3d.gltf#pos=1,0,0&prio=-5&t=0,100
+=============================+=================================+
| Demo | Explanation |
+=============================+=================================+
| pos=1,2,3 | vector/coordinate argument e.g. |
+-----------------------------+---------------------------------+
| pos=1,2,3&rot=0,90,0&q=.foo | combinators |
+-----------------------------+---------------------------------+
Table 4
9. Text in XR (tagging,linking to spatial objects)
We still think and speak in simple text, not in HTML or RDF.
The most advanced human will probably not shout <h1>FIRE!</h1> in
case of emergency.
Given the new dawn of (non-keyboard) XR interfaces, keeping text as
is (not obscuring with markup) is preferred.
Ideally metadata must come *with* text, but not *obfuscate* the text,
nor live *in another* file.
This way:
1. XR Fragments allows hassle-free spatial tagging, by detecting
   BibTeX metadata *at the end of content* of text (see default
   mimetype & Data URI)
2. XR Fragments allows hassle-free spatial tagging, by treating 3D
   object name/class-pairs as BibTeX tags.
3. XR Fragments allows hassle-free textual tagging, spatial tagging,
   and supra tagging, by mapping 3D/text object (class)names using
   BibTeX 'tags'
4. BibTex & Hashtagbibs are the first-choice *requestless metadata*-
   layer for XR text; HTML/RDF/JSON is great, but fits better in the
   application-layer
5. Default font (unless specified otherwise) is a modern monospace
   font, for maximized tabular expressiveness (see the core
   principle (#core-principle)).
6. anti-pattern: hardcoupling a mandatory *obtrusive markuplanguage*
   or framework with an XR browser (HTML/VRML/Javascript) (see the
   core principle (#core-principle))
7. anti-pattern: limiting human introspection, by immediately
   funneling human thought into typesafe, precise, pre-categorized
   metadata like RDF (see the core principle (#core-principle))
This allows recursive connections between text itself, as well as 3D
objects and vice versa, using *BibTags*:
http://y.io/z.fbx | (Evaluated) BibTex/ 'wires' / tags |
----------------------------------------------------------------------------+-------------------------------------
| @house{castle,
+-[src: data:.....]----------------------+ +-[3D mesh]-+ | url = {https://y.io/z.fbx#castle}
| My Notes | | / \ | | }
| | | / \ | | @baroque{castle,
| The houses are built in baroque style. | | / \ | | url = {https://y.io/z.fbx#castle}
| | | |_____| | | }
| @house{baroque, | +-----│-----+ | @house{baroque,
| description = {classic} | ├─ name: castle | description = {classic}
| } | └─ class: house baroque | }
+----------------------------------------+ | @house{contactowner,
| }
+-[remotestorage.io / localstorage]------+ | @todo{contactowner,
| #contactowner@todo@house | | }
| ... | |
+----------------------------------------+ |
BibTex (generated from 3D objects) can be extended by the enduser
with personal BibTeX or hashtagbibs
(https://github.com/coderofsalvation/hashtagbibs).
| hashtagbibs (https://github.com/coderofsalvation/hashtagbibs)
| allows the enduser to add 'postit' connections (compressed BibTex)
| by speaking/typing/scanning text, which the XR Browser saves to
| remotestorage (or localStorage per toplevel URL), as well as
| reference BibTags per URI later on: https://y.io/
| z.fbx#@baroque@todo e.g.
Obviously, expressing the relationships above in XML/JSON instead of
BibTeX, would cause instant cognitive overload.
This allows instant realtime filtering of relationships at various
levels:
+====================================+============================+
| scope | matching algo |
+====================================+============================+
| textual | text containing 'baroque' or |
| | 'house' is now automatically |
| | tagged with 'house' (incl. |
| | plaintext src child nodes) |
+------------------------------------+----------------------------+
| spatial | name baroque or |
| | "class":"house" are now |
| | automatically tagged with |
| | 'house' (incl. child |
| | nodes) |
+------------------------------------+----------------------------+
| supra | text- or spatial-object(s) |
| | (non-descendant nodes) |
| | elsewhere, (class)named |
| | 'baroque' or 'house', are |
| | automatically tagged with |
| | 'house' (current node to |
| | root nodes) |
+------------------------------------+----------------------------+
| omni | text- or spatial-object(s) |
| | (non-descendant nodes) |
| | elsewhere, (class)named |
| | 'baroque' or 'house', are |
| | automatically tagged with |
| | 'house' (root node to all |
| | nodes) |
+------------------------------------+----------------------------+
| infinite | text- or spatial-object(s) |
| | (non-descendant nodes) |
| | elsewhere, (class)named |
| | 'baroque' or 'house', are |
| | automatically tagged with |
| | 'house' (root node to all |
| | nodes) |
+------------------------------------+----------------------------+
Table 5
BibTex allows the enduser to adjust different levels of associations
(see the core principle (#core-principle)): spatial wires can be
rendered, words can be highlighted, spatial objects can be
highlighted/moved/scaled, links can be manipulated by the user.
| NOTE: infinite matches both 'baroque' and 'style'-occurrences in
| text, as well as spatial objects with "class":"style" or name
| "baroque".  This multiplexing of id/category is deliberate because
| of the core principle (#core-principle).
8. The XR Browser needs to adjust tag-scope based on the enduser's
   needs/focus (infinite tagging only makes sense when the
   environment is scaled down significantly)
9. The XR Browser should always allow the human to view/edit the
metadata, by clicking 'toggle metadata' on the 'back'
(contextmenu e.g.) of any XR text, anywhere anytime.
| The simplicity of appending BibTeX (and leveling the metadata-
| playfield between humans and machines) is also demonstrated by
| visual-meta (https://visual-meta.info) in greater detail.
9.1. Default Data URI mimetype
The src-values work as expected (respecting mime-types), however:
The XR Fragment specification bumps the traditional default browser-
mimetype
text/plain;charset=US-ASCII
to a hashtagbib(tex)-friendly one:
text/plain;charset=utf-8;bib=^@
This indicates that:
* utf-8 is supported by default
* hashtagbibs (https://github.com/coderofsalvation/hashtagbibs) are
expanded to bibtags (https://en.wikipedia.org/wiki/BibTeX)
* lines matching regex ^@ will automatically get filtered out, so
  that:
* links between textual/spatial objects can automatically be
  detected
* bibtag appendices (visual-meta (https://visual-meta.info) e.g.)
  can be interpreted
| for more info on this mimetype see bibs
| (https://github.com/coderofsalvation/hashtagbibs)
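A non-normative sketch of what this mimetype implies for a renderer:
splitting human text from the bibtag lines matching ^@ (this naive
version ignores multi-line BibTeX values; the full (de)multiplexer in
chapter 9.4 handles those):

splitTextAndBibs = (str) => {
  let text = [], bibs = []
  str.split(/\r?\n/).map( (line) =>
    ( line.match(/^@/) ? bibs : text ).push(line)   // ^@ lines are metadata
  )
  return { text: text.join('\n'), bibs: bibs.join('\n') }
}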
Advantages:
* out-of-the-box (de)multiplex human text and metadata in one go
(see the core principle (#core-principle))
* no network-overhead for metadata (see the core principle (#core-
principle))
* ensuring high FPS: HTML/RDF historically is too 'requesty'/'parsy'
for game studios
* rich send/receive/copy-paste everywhere by default, metadata being
retained (see the core principle (#core-principle))
* net result: fewer webservices, therefore fewer servers, and
  overall better FPS in XR
| This significantly expands expressiveness and portability of human
| tagged text, by *postponing machine-concerns to the end of the
| human text* in contrast to literal interweaving of content and
| markupsymbols (or extra network requests, webservices e.g.).
For all other purposes, regular mimetypes can be used (but are not
required by the spec).
9.2. URL and Data URI
+--------------------------------------------------------------+ +------------------------+
| | | author.com/article.txt |
| index.gltf | +------------------------+
| │ | | |
| ├── ◻ article_canvas | | Hello friends. |
| │ └ src: ://author.com/article.txt | | |
| │ | | @friend{friends |
| └── ◻ note_canvas | | ... |
| └ src:`data:welcome human\n@...` | | } |
| | +------------------------+
| |
+--------------------------------------------------------------+
The enduser will only see welcome human and Hello friends rendered
spatially (see mimetype). The beauty is that text in Data URI
automatically promotes rich copy-paste (retaining metadata). In both
cases, the text gets rendered immediately (onto a plane geometry,
hence the name '_canvas'). The XR Fragment-compatible browser can
let the enduser access visual-meta(data)-fields after interacting
with the object (contextmenu e.g.).
| additional tagging using bibs
| (https://github.com/coderofsalvation/hashtagbibs): to tag spatial
| object note_canvas with 'todo', the enduser can type or speak
| @note_canvas@todo
9.3. Bibs & BibTeX: lowest common denominator for linking data
| "When a car breaks down, the ones *without* turbosupercharger are
| easier to fix"
Unlike XML or JSON, BibTex is typeless, unnested, and uncomplicated,
hence a great advantage for introspection.
It's a missing sensemaking precursor to extrospective RDF.
BibTeX-appendices are already used in the digital AND physical world
(academic books, visual-meta (https://visual-meta.info)), perhaps due
to its terseness & simplicity.
In that sense, it's one step up from the .ini fileformat (which has
never leaked into the physical world like BibTex):
1. frictionless copy/pasting (by
humans) of (unobtrusive) content AND metadata
2. an introspective 'sketchpad' for metadata, which can (optionally)
mature into RDF later
+================+=====================================+===============+
|characteristic |UTF8 Plain Text (with BibTeX) |RDF |
+================+=====================================+===============+
|perspective |introspective |extrospective |
+----------------+-------------------------------------+---------------+
|structure |fuzzy (sensemaking) |precise |
+----------------+-------------------------------------+---------------+
|space/scope |local |world |
+----------------+-------------------------------------+---------------+
|everything is |yes |no |
|text (string) | | |
+----------------+-------------------------------------+---------------+
|voice/paper- |bibs |no |
|friendly |(https://github.com/coderofsalvation/| |
| |hashtagbibs) | |
+----------------+-------------------------------------+---------------+
|leaves |yes |no |
|(dictated) text | | |
|intact | | |
+----------------+-------------------------------------+---------------+
|markup language |just an appendix |~4 different |
+----------------+-------------------------------------+---------------+
|polyglot format |no |yes |
+----------------+-------------------------------------+---------------+
|easy to copy/ |yes |up to |
|paste | |application |
|content+metadata| | |
+----------------+-------------------------------------+---------------+
|easy to write/ |yes |depends |
|repair for | | |
|layman | | |
+----------------+-------------------------------------+---------------+
|easy to |yes (fits on A4 paper) |depends |
|(de)serialize | | |
+----------------+-------------------------------------+---------------+
|infrastructure |selfcontained (plain text) |(semi)networked|
+----------------+-------------------------------------+---------------+
|freeform |yes, terse |yes, verbose |
|tagging/ | | |
|annotation | | |
+----------------+-------------------------------------+---------------+
|can be appended |yes |up to |
|to text-content | |application |
+----------------+-------------------------------------+---------------+
|copy-paste text |yes |up to |
|preserves | |application |
|metadata | | |
+----------------+-------------------------------------+---------------+
|emoji |yes |depends on |
| | |encoding |
+----------------+-------------------------------------+---------------+
|predicates |free |semi pre- |
| | |determined |
+----------------+-------------------------------------+---------------+
|implementation/ |no |depends |
|network overhead| | |
+----------------+-------------------------------------+---------------+
|used in |yes (visual-meta) |no |
|(physical) | | |
|books/PDF | | |
+----------------+-------------------------------------+---------------+
|terse non-verb |yes |no |
|predicates | | |
+----------------+-------------------------------------+---------------+
|nested |no (but: BibTex rulers) |yes |
|structures | | |
+----------------+-------------------------------------+---------------+
Table 6
| To keep XR Fragments a lightweight spec, BibTeX is used for
| rudimentary text/spatial tagging (not JSON, RDF or a scripting
| language, because they're harder to write/speak/repair).
Of course, on an application-level JSON(LD / RDF) can still be used
at will, by embedding RDF-urls/data as custom properties (but is not
interpreted by this spec).
9.4. XR Text example parser
1. The XR Fragments spec does not aim to harden the BibTeX format
2. respect multi-line BibTex values because of the core principle
(#core-principle)
3. Respect hashtag(bibs) and rulers (like ${visual-meta-start})
according to the hashtagbibs spec
(https://github.com/coderofsalvation/hashtagbibs)
4. BibTeX snippets should always start in the beginning of a line
(regex: ^@), hence mimetype text/plain;charset=utf-8;bib=^@
Here's an XR Text (de)multiplexer in javascript, which ticks all the
above boxes:
xrtext = {
expandBibs: (text) => {
let bibs = { regex: /(#[a-zA-Z0-9_+@\-]+(#)?)/g, tags: {}}
text.replace( bibs.regex , (m,k,v) => {
tok = m.substr(1).split("@")
match = tok.shift()
if( tok.length ) tok.map( (t) => bibs.tags[t] = `@${t}{${match},\n}` )
else if( match.substr(-1) == '#' )
bibs.tags[match] = `@{${match.replace(/#/,'')}}`
else bibs.tags[match] = `@${match}{${match},\n}`
})
return text.replace( bibs.regex, '') + Object.values(bibs.tags).join('\n')
},
decode: (str) => {
// bibtex: ↓@ ↓ ↓property ↓end
let pat = [ /@/, /^\S+[,{}]/, /},/, /}/ ]
let tags = [], text='', i=0, prop=''
let lines = xrtext.expandBibs(str).replace(/\r?\n/g,'\n').split(/\n/)
for( let i = 0; i < lines.length && !String(lines[i]).match( /^@/ ); i++ )
text += lines[i]+'\n'
bibtex = lines.join('\n').substr( text.length )
bibtex.split( pat[0] ).map( (t) => {
try{
let v = {}
if( !(t = t.trim()) ) return
if( tag = t.match( pat[1] ) ) tag = tag[0]
if( tag.match( /^{.*}$/ ) ) return tags.push({ruler:tag})
t = t.substr( tag.length )
t.split( pat[2] )
.map( kv => {
if( !(kv = kv.trim()) || kv == "}" ) return
v[ kv.match(/\s?(\S+)\s?=/)[1] ] = kv.substr( kv.indexOf("{")+1 )
})
tags.push( { k:tag, v } )
}catch(e){ console.error(e) }
})
return {text, tags}
},
encode: (text,tags) => {
let str = text+"\n"
for( let i in tags ){
let item = tags[i]
if( item.ruler ){
str += `@${item.ruler}\n`
continue;
}
str += `@${item.k}\n`
for( let j in item.v ) str += ` ${j} = {${item.v[j]}}\n`
str += `}\n`
}
return str
}
}
The above functions (de)multiplex text/metadata, expand bibs, and
(de)serialize bibtex (and all of it fits more or less on one A4 paper)
| the above can be used as a startingpoint for LLMs to translate/
| steelman it into a more formal form/language.
str = `
hello world
here are some hashtagbibs followed by bibtex:
#world
#hello@greeting
#another-section#
@{some-section}
@flap{
asdf = {23423}
}`
var {tags,text} = xrtext.decode(str) // demultiplex text & bibtex
tags.find( (t) => t.k == 'flap{' ).v.asdf = 1 // edit tag
tags.push({ k:'bar{', v:{abc:123} }) // add tag
console.log( xrtext.encode(text,tags) ) // multiplex text & bibtex back together
This expands to the following (hidden by default) BibTex appendix:
hello world
here are some hashtagbibs followed by bibtex:
@{some-section}
@flap{
asdf = {1}
}
@world{world,
}
@greeting{hello,
}
@{another-section}
@bar{
abc = {123}
}
| when an XR browser updates the human text, a quick scan for
| nonmatching tags (@book{nonmatchingbook e.g.) should be performed,
| prompting the enduser to delete them.
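A non-normative sketch of such a scan, reusing xrtext.decode from the
example parser above (the citation-key extraction is a
simplification):

findOrphanTags = (str) => {
  let { text, tags } = xrtext.decode(str)
  return tags.filter( (t) => {
    let key = t.k && (t.k.match(/{([^,]+)/) || [])[1]   // '@house{baroque,' -> 'baroque'
    return key && !text.toLowerCase().includes( key.toLowerCase() )
  })
}

// findOrphanTags(str).map( (t) => console.log('orphan tag:', t.k) )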
10. HYPER copy/paste
The previous example offers something exciting compared to simple
copy/paste of 3D objects or text.  XR Text, according to the XR
Fragment spec, allows HYPER-copy/paste: time, space and text
interlinked.  Therefore, the enduser in an XR Fragment-compatible
browser can copy/paste/share data in these ways:
1. time/space: 3D object (current animation-loop)
2. text: TeXt object (including BibTeX/visual-meta if any)
3. interlinked: Collected objects by visual-meta tag
11. Security Considerations
Since XR Text contains metadata too, the user should be able to set
up tagging-rules, so the copy-paste feature can:
* filter out sensitive data when copy/pasting (XR text with
  class:secret e.g.)
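A non-normative sketch of such a tagging-rule, filtering out tags
whose class matches an enduser-defined blocklist before pasting (the
blocklist and field names are illustrative):

filterSensitive = (tags, blocklist = ['secret']) => {
  return tags.filter( (t) => {
    let cls = (t.v && t.v.class) || ''
    return !blocklist.some( (b) => cls.split(' ').includes(b) )
  })
}

// xrtext.encode( text, filterSensitive(tags) )   // paste only the non-secret tags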
12. IANA Considerations
This document has no IANA actions.
13. Acknowledgments
* NLNET (https://nlnet.nl)
* Future of Text (https://futureoftext.org)
* visual-meta.info (https://visual-meta.info)
14. Appendix: Definitions
+===============+==============================================+
| definition | explanation |
+===============+==============================================+
| human | a sentient being who thinks fuzzy, absorbs, |
| | and shares thought (by plain text, not |
| | markuplanguage) |
+---------------+----------------------------------------------+
| scene | a (local/remote) 3D scene or 3D file |
| | (index.gltf e.g.) |
+---------------+----------------------------------------------+
| 3D object | an object inside a scene characterized by |
| | vertex-, face- and customproperty data. |
+---------------+----------------------------------------------+
| metadata | custom properties of text, 3D Scene or |
| | Object(nodes), relevant to machines and a |
| | human minority (academics/developers) |
+---------------+----------------------------------------------+
| XR fragment | URI Fragment with spatial hints like |
| | #pos=0,0,0&t=1,100 e.g. |
+---------------+----------------------------------------------+
| src | (HTML-piggybacked) metadata of a 3D object |
| | which instances content |
+---------------+----------------------------------------------+
| href | (HTML-piggybacked) metadata of a 3D object |
| | which links to content |
+---------------+----------------------------------------------+
| query | an URI Fragment-operator which queries |
| | object(s) from a scene like #q=cube |
+---------------+----------------------------------------------+
| visual-meta | visual-meta (https://visual-meta.info) data |
| | appended to text/books/papers which is |
| | indirectly visible/editable in XR. |
+---------------+----------------------------------------------+
| requestless | metadata which never spawns new requests |
| metadata | (unlike RDF/HTML, which can cause framerate- |
| | dropping, hence not used a lot in games) |
+---------------+----------------------------------------------+
| FPS | frames per second in spatial experiences |
| | (games,VR,AR e.g.), should be as high as |
| | possible |
+---------------+----------------------------------------------+
| introspective | inward sensemaking ("I feel this belongs to |
| | that") |
+---------------+----------------------------------------------+
| extrospective | outward sensemaking ("I'm fairly sure John |
| | is a person who lives in Oklahoma") |
+---------------+----------------------------------------------+
| ◻ | ascii representation of a 3D object/mesh |
+---------------+----------------------------------------------+
| (un)obtrusive | obtrusive: wrapping human text/thought in |
| | XML/HTML/JSON obfuscates human text into a |
| | salad of machine-symbols and words |
+---------------+----------------------------------------------+
| BibTeX | simple tagging/citing/referencing standard |
| | for plaintext |
+---------------+----------------------------------------------+
| BibTag | a BibTeX tag |
+---------------+----------------------------------------------+
| (hashtag)bibs | an easy to speak/type/scan tagging SDL (see |
| | here (https://github.com/coderofsalvation/ |
| | hashtagbibs)) |
+---------------+----------------------------------------------+
Table 7