This draft is a specification for 4D URIs & [hypermediatic](https://github.com/coderofsalvation/hypermediatic) navigation, which links space, time & text together, for hypermedia browsers with or without a network-connection.<br>
XR Fragments allows us to better use existing metadata inside 3D scene(files), by connecting it to proven technologies like [URI Fragments](https://en.wikipedia.org/wiki/URI_fragment).
1. addressability and [hypermediatic](https://github.com/coderofsalvation/hypermediatic) navigation of 3D scenes/objects: [URI Fragments](https://en.wikipedia.org/wiki/URI_fragment) + src/href spatial metadata
Instead of forcing authors to combine 3D/2D objects programmatically (publishing thru a game-editor e.g.), XR Fragments **integrates all of them**, which allows a universal viewing experience.<br>
Fact: our typical browser URLs are just **a possible implementation** of URIs (for the untapped human-centric potential of URIs [see interpeer.io](https://interpeer.io))
> XR Fragments does not look at XR (or the web) thru the lens of HTML or URLs.<br>Instead, it approaches things from a higher-level feedbackloop/hypermedia browser-perspective.
XR Fragments itself is [hypermediatic](https://github.com/coderofsalvation/hypermediatic) and HTML-agnostic, though pseudo XR Fragment browsers **can** be implemented on top of HTML/Javascript.
Pseudo (non-native) browser-implementations (supporting XR Fragments using HTML+JS e.g.) can use the `?` search-operator to address outbound content.<br>
In other words, the URL updates to: `https://me.com?https://me.com/other.glb` when navigating to `https://me.com/other.glb` from inside a `https://me.com` WebXR experience e.g.<br>
That way, if the link gets shared, the XR Fragments implementation at `https://me.com` can load the latter (and still indicate which XR Fragments entrypoint-experience/client was used).
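A minimal sketch of this rewrite in javascript (the `navigateTo` helper is illustrative, not prescribed by this spec):

```js
// sketch: reflect outbound navigation in the toplevel URL via the '?' search-operator
function navigateTo(outboundUrl) {
  const entrypoint = location.origin + location.pathname  // e.g. https://me.com/
  history.pushState({}, '', entrypoint + '?' + outboundUrl)
  // ...then fetch & instance outboundUrl into the scene...
}

// navigating from inside the https://me.com WebXR experience:
navigateTo('https://me.com/other.glb')
// toplevel URL now reads: https://me.com/?https://me.com/other.glb
```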
| | x,y,z,xspeed,yspeed,zspeed | 0,0.5,1,0.2,0,2 | XD timeline: play uv-coordinate at `0,0.5`, scroll `x` (=u) by `0.2` each second, and pass `1` and `2` as custom data to shader uniforms `za` and `zb` |
> NOTE: XR Fragments are optional but also file- and protocol-agnostic, which means that programmatic 3D scene(nodes) can also use the mechanism/metadata.
| `#<objectname>=<mediafrag>` | string=[media frag](https://www.w3.org/TR/media-frags/#valid-uri) | `#foo=0,1`| play media `src` using [media fragment URI](https://www.w3.org/TR/media-frags/#valid-uri) |
| `#<objectname>=<timevector>` | string=timevector | `#sky=0,0.5,0.1,0`| sets 1D/2D/3D time(line) vectors (uv-position e.g.) to `0,0.5` (and autoscroll x with max `0.1` every second)|
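To illustrate how such values could be parsed, here is a hedged sketch in javascript (the `parseXRFragment` name and the returned shape are assumptions, not the normative grammar):

```js
// sketch: parse '#sky=0,0.5,0.1,0&t=0,7,7' into { sky:[0,0.5,0.1,0], t:[0,7,7] }
function parseXRFragment(hash) {
  const frag = {}
  for (const pair of hash.replace(/^#/, '').split('&')) {
    const [key, value] = pair.split('=')
    if (value === undefined) { frag[key] = true; continue }   // bare '#cube' e.g.
    const nums = value.split(',').map(Number)
    frag[key] = nums.some(isNaN) ? value : nums               // string vs (time)vector
  }
  return frag
}

parseXRFragment('#sky=0,0.5,0.1,0')   // => { sky: [0, 0.5, 0.1, 0] }
```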
> It also allows **sourceportation**, which basically means the enduser can teleport to the original XR Document of an `src`-embedded object, and see a visible connection to that particular embedded object. Basically, an embedded link becomes an outbound link by activating it.
1. the Y-coordinate of `pos` identifies the floor position. This means that desktop-projections usually need to add 1.5m (average person height) on top (which is done automatically by VR/AR headsets).
1. set the position of the camera according to the vector3 values of `#pos`
1. `rot` sets the rotation of the camera (only for non-VR/AR headsets; see the sketch after this list)
1. `t` sets the playback speed and animation-range of the current scene animation(s) or `src`-mediacontent (video/audio-frames e.g.; use `t=0,7,7` to 'STOP' at frame 7)
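A sketch of the camera-related steps above, assuming three.js (the helper name, the `frag` shape and the 1.5m handling are illustrative):

```js
import * as THREE from 'three'

// sketch: apply #pos / #rot to a camera (frag as produced by a fragment-parser)
function applyCameraFragment(camera, frag, isHeadset) {
  if (frag.pos) {
    const [x, y, z] = frag.pos
    // y identifies the floorposition: desktop-projections add ~1.5m (average person
    // height), VR/AR headsets do this automatically
    camera.position.set(x, y + (isHeadset ? 0 : 1.5), z)
  }
  if (frag.rot && !isHeadset) {               // rot only applies to non-VR/AR headsets
    const [rx, ry, rz] = frag.rot.map(THREE.MathUtils.degToRad)
    camera.rotation.set(rx, ry, rz)
  }
}
```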
An XR Fragment-compatible browser viewing this scene, allows the end-user to interact with `buttonA` and `buttonB`.<br>
In case of `buttonA` the end-user will be teleported to another location and time in the **current loaded scene**, but `buttonB` will **replace the current scene** with a new one, like `other.fbx`, and assume `pos=0,0,0`.
1. IF a `#cube` matches a custom property-key (of an object) in the 3D file/scene (`#cube`: `#......`) <b>THEN</b> execute that predefined_view.
2. IF scene operators (`pos`) and/or animation operator (`t`) are present in the URL then (re)position the camera and/or animation-range accordingly.
3. IF no camera-position has been set in <b>step 1 or 2</b> update the top-level URL with `#pos=0,0,0` ([example](https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/navigator.js#L31))
4. IF a `#cube` matches the name (of an object) in the 3D file/scene then draw a line from the enduser('s heart) to that object (to highlight it).
5. IF a `#cube` matches anything else in the XR Word Graph (XRWG), draw wires to them (text or related objects); a sketch of these steps follows below.
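A much-simplified sketch of this decision chain in javascript (all helper names are assumptions; only the ordering of the steps comes from the list above):

```js
// sketch: evaluate a fragment like '#cube' on href-activation
function activateFragment(scene, xrwg, frag, camera) {
  let positioned = false
  for (const name of Object.keys(frag)) {
    const obj = scene.getObjectByName(name)
    if (obj && obj.userData['#' + name]) {                // 1. predefined_view property-key
      executePredefinedView(obj.userData['#' + name]); positioned = true
    } else if (obj) drawLineToObject(obj)                 // 4. objectname match: highlight
    else if (xrwg.has(name)) drawWiresTo(xrwg.get(name))  // 5. XRWG match: wire relations
  }
  if (frag.pos || frag.t) { applyPosAndTimeline(frag, camera); positioned = true }  // 2.
  if (!positioned) history.replaceState({}, '', '#pos=0,0,0')  // 3. default position
}
```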
An XR Fragment-compatible browser viewing this scene, lazy-loads and projects `painting.png` onto the (plane) object called `canvas` (which is copy-instanced in the bed and livingroom).<br>
Also, after lazy-loading `ocean.com/aquarium.gltf`, only the queried objects `fishbowl` (and `bass` and `tuna`) will be instanced inside `aquariumcube`.<br>
> Instead of cherrypicking a rootobject `#fishbowl` with `src`, additional filters can be used to include/exclude certain objects. See next chapter on filtering below.
2. <b>local</b> `src` values (`#...` e.g.) starting with a non-negating filter (`#cube` e.g.) will (deep)reparent that object (with name `cube`) as the new root of the scene at position `0,0,0`
3. <b>local</b> `src` values should respect (negative) filters (`#-foo&price=>3`)
4. the instanced scene (from a `src` value) should be <b>scaled accordingly</b> to its placeholder object or <b>scaled relatively</b> based on the scale-property (of a geometry-less placeholder, an 'empty'-object in blender e.g.). For more info see Chapter Scaling.
5. <b>external</b> `src` values should be served with the appropriate mimetype (so the XR Fragment-compatible browser will know how to render it). The bare minimum supported mimetypes are:
6. `src` values should make their placeholder object invisible, and only flush its children when the resolved content can successfully be retrieved (see [broken links](#links))
7. <b>external</b> `src` values should respect the fallback link mechanism (see [broken links](#broken-links)); a sketch covering rules 5-7 follows after this list
8. when the placeholder object is a 2D plane, but the mimetype is 3D, then render the spatial content on that plane via a stencil buffer.
9. src-values are non-recursive: when linking to an external object (`src: foo.fbx#bar`), then `src`-metadata on object `bar` should be ignored.
12. when the enduser clicks an href with `#t=1,0,0` (play), it will be applied to all `src` mediacontent with a timeline (mp4/mp3 e.g.)
13. a non-euclidean portal can be rendered for flat 3D objects (using a stencil buffer e.g.) in case of spatial `src`-values (an object `#world3` or URL `world3.fbx` e.g.).
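A sketch covering rules 5-7 in javascript (the loader table and helper names are assumptions; only the visibility/fallback behaviour comes from the rules above):

```js
// sketch: resolve an external src value on a placeholder object
async function resolveSrc(placeholder, src) {
  if (placeholder.material) placeholder.material.visible = false  // rule 6: hide placeholder
  const res = await fetch(src)
  if (!res.ok) return applyFallbackLink(placeholder)       // rule 7: broken-link fallback
  const mime = (res.headers.get('Content-Type') || '').split(';')[0]
  const load = { 'model/gltf-binary': loadGLTF,            // rule 5: dispatch on mimetype
                 'image/png':         loadImagePlane }[mime]
  if (!load) return applyFallbackLink(placeholder)         // unsupported mimetype
  placeholder.add(await load(await res.arrayBuffer()))     // rule 6: flush in the children
}
```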
2. relocation/reorientation should happen locally for local URIs (`#pos=....`)
3. navigation should not happen *immediately* when the user is more than 2 meters away from the portal/object containing the href (to prevent accidental navigation e.g.)
4. URL navigation should always be reflected in the client (in case of javascript: see [here](https://github.com/coderofsalvation/xrfragment/blob/dev/src/3rd/js/three/navigator.js) for an example navigator).
7. In XR mode, the navigator back/forward-buttons should always be visible (using a wearable e.g.; see [here](https://github.com/coderofsalvation/xrfragment/blob/dev/example/aframe/sandbox/index.html#L26-L29) for an example wearable)
8. in case of navigating to a new position, *first* navigate to the *current position* so that the *back-button* of the *browser-history* always refers to the previous position (see [here](https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/href.js#L97) and the sketch below)
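A sketch of rule 8 (the `applyFragment` helper is hypothetical; compare the linked `href.js`):

```js
// sketch: make the browser back-button return to the previous position
function navigate(toFragment, camera) {
  const { x, y, z } = camera.position
  // 'first' navigate to the current position (replacing the current history entry)...
  history.replaceState({}, '', `#pos=${x},${y},${z}`)
  // ...then push the new fragment, so 'back' lands on where the user came from
  history.pushState({}, '', toFragment)
  applyFragment(toFragment)   // hypothetical: actually move the camera/scene
}
```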
How does the scale of the object (with the embedded properties) impact the scale of the referenced content?<br>
> Rule of thumb: visible placeholder objects act as a '3D canvas' for the referenced scene (a plane acts like a 2D canvas for images, a cube as a 3D canvas e.g.).
1. <b>IF</b> an embedded property (`src` e.g.) is set on a non-empty placeholder object (geometry of >2 vertices):
* calculate the <b>bounding box</b> of the *placeholder* object (maxsize=1.4 e.g.)
* hide the *placeholder* object (material e.g.)
* instance the `src` scene as a child of the existing object
* calculate the <b>bounding box</b> of the instanced scene, and scale it accordingly (to 1.4 e.g.; see the sketch below)
> REASON: a non-empty placeholder object can act as a protective bounding-box (for remote content which might grow over time e.g.)
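A simplified sketch of these steps with three.js (it ignores world-transforms and placeholder orientation for brevity):

```js
import * as THREE from 'three'

// sketch: fit an instanced src scene into its non-empty placeholder 'canvas'
function fitSrcToPlaceholder(placeholder, srcScene) {
  const pSize = new THREE.Box3().setFromObject(placeholder).getSize(new THREE.Vector3())
  const maxsize = Math.max(pSize.x, pSize.y, pSize.z)       // e.g. 1.4
  placeholder.material.visible = false                      // hide the placeholder
  const sSize = new THREE.Box3().setFromObject(srcScene).getSize(new THREE.Vector3())
  srcScene.scale.setScalar(maxsize / Math.max(sSize.x, sSize.y, sSize.z))
  placeholder.add(srcScene)                                 // instance as child
}
```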
* see [an (outdated) example video here](https://coderofsalvation.github.io/xrfragment.media/queries.mp4) which used a dedicated `q=` variable (now deprecated: filters are usable directly)
> NOTE 1: after an external embedded object has been instanced (`src: https://y.com/bar.fbx#room` e.g.), filters do not affect them anymore (reason: local tag/name collisions can be mitigated easily, but not in case of remote content).
> NOTE 2: depending on the used 3D framework, toggling objects (in)visible should happen by enabling/disabling writing to the colorbuffer (to allow children to remain visible while their parents are invisible).
> An example filter-parser (which compiles to many languages) can be [found here](https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Filter.hx)
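For illustration, a much-simplified filter-parser sketch in javascript (the rule objects are an assumption; the linked Filter.hx is the reference implementation):

```js
// sketch: parse '#-foo&price=>3' into include/exclude rules
function parseFilter(frag) {
  return frag.replace(/^#/, '').split('&').map((pair) => {
    let [key, value] = pair.split('=')
    const exclude = key.startsWith('-')
    if (exclude) key = key.slice(1)
    if (value && value[0] === '>')            // numeric comparison like price=>3
      return { key, op: '>', value: parseFloat(value.slice(1)), exclude }
    return { key, value, exclude }
  })
}

parseFilter('#-foo&price=>3')
// => [ { key:'foo', value:undefined, exclude:true },
//      { key:'price', op:'>', value:3, exclude:false } ]
```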
The obvious approach for this is to consult the XRWG ([example](https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js)), which basically has all these things already collected/organized for you during scene-load.
> XR Fragments does this by collapsing space into a **Word Graph** (the **XRWG**, [example](https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js)), augmented by Bib(s)Tex.
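A sketch of such a collapse during scene-load (the graph shape and the `userData.text` location are assumptions; see the linked XRWG.js for the real thing):

```js
// sketch: collect objectnames, tags and text into one word graph
function buildXRWG(scene) {
  const xrwg = new Map()                          // word -> nodes it connects to
  const connect = (word, node) => {
    if (!xrwg.has(word)) xrwg.set(word, [])
    xrwg.get(word).push(node)
  }
  scene.traverse((node) => {
    if (node.name) connect(node.name, node)       // objectnames become words
    const text = node.userData.text || ''         // assumed location of XR Text
    for (const bib of text.match(/#\w+(@\w+)*/g) || [])   // bibs like '#john@baroque'
      bib.slice(1).split('@').forEach((word) => connect(word, node))
  })
  return xrwg
}
```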
Instead of just throwing together all kinds of media types into one experience (games), what about their tagged/semantical relationships?<br>
Perhaps the following question is related: why is HTML adopted less in games outside the browser?
Through the lens of constructively lazy game-developers, ideally metadata must come **with** the text, but neither **obfuscate** the text nor **spawn another request** to fetch it.<br>
> Why Bib(s)Tex? Because it seems to be the lowest common denominator for a human-curated XRWG (extendable by speech/scanner/writing/typing e.g.; see [further motivation here](https://github.com/coderofsalvation/hashtagbibs#bibs--bibtex-combo-lowest-common-denominator-for-linking-data))
1. XR Fragments promotes (de)serializing a scene to the XRWG ([example](https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js))
5. Like Bibs, XR Fragments generalizes the BibTex author/title-semantics (`author{title}`) into **this** points to **that** (`this{that}`)
6. The XRWG should be recalculated when text values (in `src`) change
7. HTML/RDF/JSON are still great, but beyond the XRWG-scope (they fit better in the application-layer)
8. Applications don't have to be able to access the XRWG programmatically, as they can easily generate one themselves by traversing the scene-nodes.
9. XR Fragments focuses on fast and easy-to-generate enduser-controllable word graphs (instead of complex implementations that try to defeat word ambiguity)
> both the `#john@baroque`-bib and the BibTex `@baroque{john}` result in the same XRWG; however, on top of that, 2 tags (`house` and `todo`) are now associated with text/objectname/tag 'baroque'.
> [hashtagbibs](https://github.com/coderofsalvation/hashtagbibs) potentially allows the enduser to annotate text/objects by **speaking/typing/scanning associations**, which the XR Browser saves to remotestorage (or localStorage per toplevel URL), as well as referencing BibTags per URI later on: `https://y.io/z.fbx#@baroque@todo` e.g.
9. The XR Browser needs to adjust tag-scope based on the enduser's needs/focus (infinite tagging only makes sense when the environment is scaled down significantly)
10. The XR Browser should always allow the human to view/edit the metadata, by clicking 'toggle metadata' on the 'back' (contextmenu e.g.) of any XR text, anywhere anytime.
13. Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see [the core principle](#core-principle)).
14. anti-pattern: hardcoupling an XR Browser with a mandatory **markup/scripting-language** which departs from unobtrusive plain text (HTML/VRML/Javascript) (see [the core principle](#core-principle))
> The simplicity of appending metadata (and leveling the metadata-playfield between humans and machines) is also demonstrated by [visual-meta](https://visual-meta.info) in greater detail.
* lines beginning with `@` will not be rendered verbatim by default ([read more](https://github.com/coderofsalvation/hashtagbibs#hashtagbib-mimetypes))
* the XRWG should expand bibs to BibTex occurring in text (`#contactjohn@todo@important` e.g.)
> This significantly expands expressiveness and portability of human tagged text, by **postponing machine-concerns to the end of the human text** in contrast to literal interweaving of content and markupsymbols (or extra network requests, webservices e.g.).
> additional tagging using [bibs](https://github.com/coderofsalvation/hashtagbibs): to tag spatial object `note_canvas` with 'todo', the enduser can type or speak `#note_canvas@todo`
To prime the XRWG with text from plain text `src`-values, here's a simplified sketch of an XR Text (de)multiplexer in javascript (which supports inline bibs & bibtex; helper names are illustrative):
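```js
// simplified sketch: split human text from trailing bibs/bibtex (the real parser is richer)
function demultiplexXRText(srcText) {
  const text = [], meta = []
  for (const line of srcText.split('\n')) {
    const l = line.trim()
    if (l.startsWith('@')) meta.push(l)                          // bibtex: not rendered verbatim
    else if (/^#\w+(@\w+)+$/.test(l)) meta.push(expandBibs(l))   // bibs: '#john@baroque' e.g.
    else text.push(line)                                         // human text stays untouched
  }
  return { text: text.join('\n'), meta }
}

// expand '#john@baroque@todo' into bibtex: '@baroque{john, }' and '@todo{john, }'
function expandBibs(bib) {
  const [subject, ...tags] = bib.slice(1).split('@')
  return tags.map((t) => `@${t}{${subject},\n}`).join('\n')
}
```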
> when an XR browser updates the human text, a quick scan for nonmatching tags (`@book{nonmatchingbook` e.g.) should be performed, prompting the enduser to delete them.
> due to the popularity, maturity and extensiveness of HTTP codes for client/server communication, non-HTTP protocols easily map to HTTP codes (ipfs ERR_NOT_FOUND maps to 404 e.g.)
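A tiny sketch of such a funnel (every identifier except the ipfs `ERR_NOT_FOUND` example above is an assumption):

```js
// sketch: map non-HTTP protocol errors onto HTTP statuscodes
const httpCode = {
  'ipfs:ERR_NOT_FOUND': 404,   // from the example above
  'ipfs:ERR_TIMEOUT':   408,   // assumed identifier: request timeout
  'dat:ERR_NO_PEERS':   503,   // assumed identifier: service unavailable
}
const toHTTPCode = (protocolError) => httpCode[protocolError] || 500
```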
> This would hide all objects tagged with `topic`, `courses` or `theme` (including math) so that later only objects tagged with `math` will be visible
This makes spatial content multi-purpose, without the need to separate content into separate files, or to show/hide things using a complex logic-layer like javascript.
**Q:** Why is everything HTTP GET-based? What about POST/PUT/DELETE and HATEOAS?<br>
**A:** Because it's out of scope: XR Fragment specifies a read-only way to surf XR documents. These things belong in the application layer (for example, an XR Hypermedia browser can decide to support POST/PUT/DELETE requests for embedded HTML thru `src` values)
**Q:** Why isn't there support for scripting, while we have things like WASM?
**A:** This is out of scope, as it unhyperifies hypermedia; it is up to XR hypermedia browser-extensions.<br> Historically, scripting/Javascript seems to have been able to turn webpages from hypermedia documents into their opposite (hyperscripted nonhypermedia documents).<br>In order to prevent this backward movement (hypermedia tends to liberate people from finicky scripting), XR Fragments should never unhyperify itself by hardcoupling to a particular markup or scripting language. [XR Macros](https://xrfragment.org/doc/RFC_XR_Macros.html) are an example of something which is probably smarter and safer for hypermedia browsers to implement, instead of going all-in with a turing-complete scripting language (and suffering the security consequences later).<br>
XR Fragments only supports filtering objects in a scene because, in the history of the javascript-powered web, showing/hiding document-entities seems to be one of the most popular basic usecases.<br>
Doing advanced scripting & network-requests under the hood is obviously an interesting endeavour, but it is something which should not be hardcoupled with hypermedia.<br>It belongs in browser extensions.<br>
Non-HTML hypermedia browsers should make browser extensions the place to 'extend' experiences, in contrast to code/javascript inside hypermedia documents (which turned out to be a hypermedia antipattern).
|URI | some resource at something somewhere via someprotocol (`http://me.com/foo.glb#foo` or `e76f8efec8efce98e6f` [see interpeer.io](https://interpeer.io))|
|URL | something somewhere via someprotocol (`http://me.com/foo.glb`) |
|visual-meta | [visual-meta](https://visual-meta.info) data appended to text/books/papers which is indirectly visible/editable in XR. |
|requestless metadata | metadata which never spawns new requests (unlike RDF/HTML, which can cause framerate-dropping, hence not used a lot in games) |
|FPS | frames per second in spatial experiences (games,VR,AR e.g.), should be as high as possible |
|introspective | inward sensemaking ("I feel this belongs to that") |
|extrospective | outward sensemaking ("I'm fairly sure John is a person who lives in oklahoma") |
|`◻` | ascii representation of a 3D object/mesh |
|(un)obtrusive | obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine-symbols and words |
|BibTeX | simple tagging/citing/referencing standard for plaintext |
|BibTag | a BibTeX tag |
|(hashtag)bibs | an easy to speak/type/scan tagging DSL ([see here](https://github.com/coderofsalvation/hashtagbibs)) which expands to BibTex/JSON/XML |