<!-- for annotated version see: https://raw.githubusercontent.com/ietf-tools/rfcxml-templates-and-schemas/main/draft-rfcxml-general-template-annotated-00.xml -->
This draft is a specification for interactive URI-controllable 3D files, enabling [hypermediatic](https://github.com/coderofsalvation/hypermediatic) navigation and a spatial web for hypermedia browsers, with or without a network connection.<br>
XR Fragments allows us to better use implicit metadata inside 3D scene(files), by mapping it to proven technologies like [URI Fragments](https://en.wikipedia.org/wiki/URI_fragment).<br>
1. addressability and [hypermediatic](https://github.com/coderofsalvation/hypermediatic) navigation of 3D scenes/objects: [URI Fragments](https://en.wikipedia.org/wiki/URI_fragment) using src/href spatial metadata
XR Fragments views XR experiences through the lens of 3D deeplinked URIs, rather than through code (frameworks) or protocol-specific browsers (a web browser e.g.).
To aid adoption, the standard consists of various (optional) support-levels, which incorporate existing standards like [W3C Media Fragments](https://www.w3.org/TR/media-frags/) and [URI Templates (RFC6570)](https://www.rfc-editor.org/rfc/rfc6570) to promote spatial addressability, sharing, navigation, filtering and databinding of objects for (XR) Browsers.<br>
> XR Fragments is, in a sense, a <b>heuristic 3D format</b> or meta-format, which leverages heuristic rules derived from any 3D scene or well-established 3D file format to extract meaningful features from scene hierarchies.<br>
These heuristics enable features that are both meaningful and consistent across different scene representations, allowing <b>higher interop</b> between file formats, 3D editors, viewers and game engines.
`href` metadata traditionally implies **click** AND **navigate**; XR Fragments, however, adds stateless **click** via the `xrf://` scheme (`xrf://....`), which does not change the browser's top-level URL address.
This allows for many extra interactions via URLs which would otherwise need a scripting language.
> Being able to use the same URI Fragment DSL for navigation (`href: #foo`) as well as interactions (`href: xrf://#foo`) greatly simplifies implementation, increases HFL, and reduces the need for scripting languages.
* potentially allowing 3D assets/nodes to publish XR Fragments to themselves/each other using the `xrf://` hashbus (`xrf://#person=walk` to trigger the `walk`-animation for object `person`)
* potentially collapsing the 3D scene to a wordgraph (for essential navigation purposes) controllable through a hash(tag)bus (see the sketch below)
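
A minimal sketch of such a hash(tag)bus; the `hashbus` object and its publish/subscribe API are hypothetical helpers for illustration, not part of the spec:

```javascript
// minimal illustrative hash(tag)bus: objects publish/subscribe XR Fragment strings
// (the `hashbus` object and its API are hypothetical, not normative)
const hashbus = {
  listeners: [],
  subscribe(fn) { this.listeners.push(fn) },
  publish(frag) {                       // e.g. publish('xrf://#person=walk')
    frag.replace(/^xrf:\/\//, '')       // strip the stateless scheme, if any
        .replace(/^#/, '')
        .split('&')
        .forEach((kv) => {
          const [key, value] = kv.split('=')
          this.listeners.forEach((fn) => fn(key, value))
        })
  }
}

// an object reacting to '#person=walk' by triggering its 'walk' animation
hashbus.subscribe((key, value) => {
  if (key === 'person' && value === 'walk') console.log('play walk-animation on person')
})
hashbus.publish('xrf://#person=walk')
```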
XR Fragments itself is [hypermediatic](https://github.com/coderofsalvation/hypermediatic) and HTML-agnostic, though pseudo XR Fragment browsers **can** be implemented on top of HTML/JavaScript.
| | (this does not update topLevel URI) | (this is non-standard, non-hypermediatic) |
> An important aspect of HFL is that URI Fragments can be triggered without updating the top-level URI (the default href-behaviour) through their own 'bus' (`xrf://#.....`). This decoupling of navigation and interaction prevents non-standard constructs like `href`: `javascript:dosomething()`.
Pseudo (non-native) browser-implementations (supporting XR Fragments using HTML+JS e.g.) can use the `?` search-operator to address outbound content.<br>
In other words, the URL updates to `https://me.com?https://me.com/other.glb` when navigating to `https://me.com/other.glb` from inside a `https://me.com` WebXR experience, for example.<br>
That way, if the link gets shared, the XR Fragments implementation at `https://me.com` can load the latter (and still indicate which XR Fragments entrypoint-experience/client was used).
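
A minimal sketch of how such a pseudo-implementation might reflect outbound navigation in the URL-bar; `loadScene()` is a hypothetical helper, not part of this spec:

```javascript
// illustrative: reflect outbound navigation via the '?' search-operator
function navigateOutbound(externalURL) {
  // e.g. inside https://me.com :  https://me.com -> https://me.com?https://me.com/other.glb
  const entrypoint = location.origin + location.pathname
  history.pushState({}, '', entrypoint + '?' + externalURL)
  loadScene(externalURL)                         // hypothetical scene-loader
}

// on startup, honour a shared '?'-addressed link
const outbound = location.search.slice(1)
if (outbound.startsWith('http')) loadScene(decodeURIComponent(outbound))
```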
> Every 3D fileformat supports named 3D objects, and these names allow URLs (fragments) to reference them (and their child objects).
Clever nested design of 3D scenes allows great ways of re-using content and/or previewing scenes.<br>
For example, to render a portal with a preview-version of the scene, create a 3D object with:
* href: `https://scene.fbx`
> It also allows **sourceportation**, which basically means the end-user can teleport to the original XR Document of an `src`-embedded object, and see a visible connection to that particular embedded object: an embedded link becomes an outbound link by activating it.
> NOTE: Linux's `setfattr/getfattr` is `xattr` on Mac, and `Set-Content/Get-Content` on Windows. See [pxattr](https://www.lesbonscomptes.com/pxattr/index.html) for low-level access.
## JSON sidecar-file
For developers, a sidecar-file allows defining **explicit** XR Fragments links (>level1) outside of the 3D file.<br>
This can be done via (objectname/metadata) key/value-pairs in a JSON [sidecar-file](https://en.wikipedia.org/wiki/Sidecar_file):
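
An illustrative (non-normative) `experience.json` sidecar for an `experience.glb`; the object-names and keys below are hypothetical:

```json
{
  "room1":  { "href": "#room2", "aria-description": "the lobby" },
  "person": { "href": "xrf://#person=walk" },
  "scene":  { "aria-description": "description of scene" }
}
```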
So after loading `experience.glb`, the existence of `experience.json` is detected, and its explicit metadata is applied.<br>
The sidecar will define (or **override** already existing) extras, which can be handy for multi-user platforms (offering 3D scene customization/personalization to users).
> In THREE.js-code this would boil down to:
```javascript
scene.userData['aria-description'] = "description of scene"
```
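
A hedged sketch of the sidecar-detection described above, assuming THREE.js's `GLTFLoader`; the merge strategy is illustrative, not normative:

```javascript
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js'

// illustrative: load experience.glb, then look for experience.json and let its
// key/value-pairs define (or override) object extras (userData)
new GLTFLoader().load('experience.glb', async (gltf) => {
  const scene = gltf.scene
  const res   = await fetch('experience.json').catch(() => null)
  if (res && res.ok) {
    const sidecar = await res.json()
    for (const [name, metadata] of Object.entries(sidecar)) {
      const node = name === 'scene' ? scene : scene.getObjectByName(name)
      if (node) Object.assign(node.userData, metadata)    // sidecar overrides extras
    }
  }
})
```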
XR Fragments seeks to connect the world of text (semantic web / RDF) and the world of pixels.<br>
Instead of forcing authors to combine 3D/2D objects programmatically (publishing through a game-editor e.g.), XR Fragments **integrates all**, which allows a universal viewing experience.<br>
Fact: our typical browser URLs are just **a possible implementation** of URIs (for the untapped human-centric potential of URIs [see interpeer.io](https://interpeer.io) or [NextGraph](https://nextgraph.org))
> XR Fragments does not look at XR (or the web) through the lens of HTML or URLs.<br>Instead, it approaches things from a higher-level, local-first 3D hypermedia browser perspective.
> ?-linked and #-linked navigation are JUST one possible way to implement XR Fragments: the essential goal is to allow a Hypermediatic FeedbackLoop (HFL) between external and internal 4D navigation.
| `#......` | vector3 | `#room1` `#room2` `#cam2` | positions/parents camera(rig) (or XR floor) to xyz-coord/object/camera and upvector |
| [Media Fragments](https://www.w3.org/TR/media-frags/) | [media fragment](#media%20fragments%20and%20datatypes) | `#t=0,2&loop` | play (and loop) the 3D animation from 0 to 2 seconds|
| level0+1 | hover 3D file [href](#via-href-metadata) | show the preview PNG thumbnail (if any). |
| level0+1 | launch 3D file [href](#via-href-metadata) | replace the current scene with a new 3D file (`href: other.glb` e.g.) |
| level2 | click internal 3D file [href](#via-href-metadata) (`#roomB` e.g.) | teleport the camera to the origin of the object named `roomB`. See [[teleport camera]].|
| level2 | click external 3D file [href](#via-href-metadata) (`foo.glb` e.g.) | replace the current scene with a new 3D file (`href: other.glb` e.g.) |
| level2 | hover external 3D file [href](#via-href-metadata) | show the preview PNG thumbnail (if any sidecar, see level0) |
| level2 | click [href](#via-href-metadata) | hashbus: execute without changing the toplevel URL location (`href: xrf://#someObjectName` e.g.) |
| level3 | click [href](#via-href-metadata) | set the global 3D animation timeline to its Media Fragment value (`#t=2,3` e.g.) |
> NOTE: hashbus links (`xrf://#foo&bar`) don't change the top-level URL, which makes them ideal for interactions (in contrast to typical `#roomC` navigation, which benefits from back/forward browser-buttons); see <a href="#hashbus">hashbus</a> for more info.
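
A hedged sketch of the level3 Media Fragment `t=` behaviour in the table above, assuming an already set-up THREE.js `AnimationMixer` called `mixer`; the looping logic is illustrative:

```javascript
// illustrative: parse '#t=2,3' and apply it to the global animation timeline
function parseTimeFragment(hash) {
  const m = hash.match(/t=([0-9.]+)(?:,([0-9.]+))?/)
  return m ? { start: parseFloat(m[1]), end: m[2] ? parseFloat(m[2]) : Infinity } : null
}

const t = parseTimeFragment('#t=2,3')             // { start: 2, end: 3 }
if (t) mixer.setTime(t.start)                     // jump the timeline to 'start'

// in the render-loop: loop between start and end (cf. '#t=0,2&loop')
function tick(delta) {
  mixer.update(delta)
  if (t && mixer.time >= t.end) mixer.setTime(t.start)
}
```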
> **Examples:** `#+menu` to show an object, `#-menu` to hide a menu, `#!menu` to teleport a menu, `#*block` to clone a grabbable block, `#|object` to share an object
This tiny but powerful symbol allows incredible interactive possibilities, by carefully positioning re-usable objects outside of a scene (below the user camera's floor e.g.).
> **NOTE**: level2 teleportation links, as well as instancing, mitigate the 'broken embedded image'-issue of HTML by **always** attaching the href-values to **a 3D (preview) object** (that way broken links will not break the design).
**Example:** clicking a 3D button with title 'menu' and [href](#href)-value `xrf:menu.glb?instance#t=4,5` would instance a 3D menu (`menu.glb`) in front of the user, and loop its animation from 4 to 5 seconds (`t=4,5`)
> This URL can be fed straight into [Web Share API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Share_API) or [xdg-open](https://www.freedesktop.org/wiki/Software/xdg-utils/)
Prefixing `xrf:` to [href](#href)-values **will prevent** [level2](#📜%20level2:%20explicit%20links) [href](#href)-values from changing the top-level URL.
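
A hedged sketch of this dispatch, reusing the illustrative `hashbus` from earlier; `navigateTo()` is a hypothetical navigator helper:

```javascript
// illustrative: 'xrf:'-prefixed hrefs execute statelessly, others update the URL
function activateHref(href) {
  if (href.startsWith('xrf:')) {
    hashbus.publish(href)               // execute, keep the top-level URL untouched
  } else {
    history.pushState({}, '', href)     // reflect navigation in the URL-bar
    navigateTo(href)                    // hypothetical: load scene / teleport camera
  }
}

activateHref('xrf://#someObjectName')   // interaction: no history entry
activateHref('#roomB')                  // navigation: back-button friendly
```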
1. IF scene operators and/or the animation operator (`t`) are present in the URL, then (re)position the camera (to `room1`) and/or the animation-range (`10`) accordingly.
2. IF no camera-position has been set in <b>step 1 or 2</b>, assume `0,0,0` as camera coordinate (XR: add user-height) ([example](https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/navigator.js#L31))
3. IF a camera-object exists with name `cam`, assume that as the user (camera) position
3. navigation should not happen 'immediately' when the user is more than 5 meters away from the portal/object containing the href (to prevent accidental navigation e.g.)
4. URL navigation should always be reflected in the client URL-bar (in case of javascript: see [here](https://github.com/coderofsalvation/xrfragment/blob/dev/src/3rd/js/three/navigator.js) for an example navigator), and only update the URL-bar after the scene (default fragment `#`) has been loaded.
5. In immersive XR mode, the navigator back/forward-buttons should always be visible (using a wearable e.g., see [here](https://github.com/coderofsalvation/xrfragment/blob/dev/example/aframe/sandbox/index.html#L26-L29) for an example wearable)
6. make sure that the 'back-button' of the 'browser-history' always refers to the previous position (see [here](https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/href.js#L97))
7. ignore previous rule in special cases, like clicking an `href` using camera-portal collision (the back-button could cause a teleport-loop if the previous position is too close)
8. href-events should bubble up the node-tree (from children to ancestors, so that ancestors can also contain an href); however, only 1 href can be executed at a time (see the sketch after this list)
9. the end-user navigator back/forward buttons should repeat a back/forward action until a `#...` primitive is found (the stateless `xrf://` href-values should not be pushed to the URL-history)
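
A hedged sketch of rule 8 (href bubbling), assuming THREE.js-style `.parent` links and the illustrative `activateHref()` above; `clickedObject` would typically come from a raycaster intersection:

```javascript
// illustrative: bubble a click upward until the first node with an href is found
function findHref(node) {
  while (node) {
    if (node.userData && node.userData.href) return node.userData.href
    node = node.parent                  // children defer to ancestors
  }
  return null
}

const href = findHref(clickedObject)    // e.g. from a raycaster intersection
if (href) activateHref(href)            // only 1 href executes per click
```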
The following metadata are non-normative but encouraged, since they are popular and cheap to parse:
* [RDF/JSON-LD](https://json-ld.org) like [this example](https://mvmd.org/standards/gltf/) or via glTF's [KHR_xmp_json_ld extension](https://github.com/KhronosGroup/glTF/tree/main/extensions/2.0/Khronos/KHR_xmp_json_ld)
> Example: object 'tryceratops' with `aria-description: is a huge dinosaurus standing on a #mountain` generates transcript `#tryceratops is a huge dinosaurus standing on a #mountain`, where the hashtags are clickable XR Fragments (activating the visible-links in the XR browser).
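
A hedged sketch of such a transcript generator; the traversal and naming are illustrative, not normative:

```javascript
// illustrative: turn an object-name + aria-description into a clickable transcript
function transcript(node) {
  const desc = (node.userData && node.userData['aria-description']) || ''
  return ('#' + node.name + ' ' + desc).trim()
}

// e.g. node 'tryceratops' -> '#tryceratops is a huge dinosaurus standing on a #mountain'
```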
14. The (dynamic) `go abc` command should navigate to `#scene2` in case there's a 3D node with name `abc` and `href` value `#scene2`
15. The `look` command should give a (contextual) 3D-to-text transcript, by scanning the `aria-description` values of the current `#...` (3D object) value (including its children)
16. The `do` command should list all possible `href` values which don't contain an `#...` XR Fragment
17. The (dynamic) `do abc` command should navigate to/execute `https://.../...` in case a 3D node exists with name `abc` and `href` value `https://.../...` (see the sketch after this list)
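
A hedged sketch of the `go`/`do` resolution above, assuming THREE.js-style `scene.traverse()` and the illustrative `activateHref()` from earlier:

```javascript
// illustrative: resolve 'go abc' / 'do abc' against named nodes with href metadata
function command(scene, verb, name) {
  let result = null
  scene.traverse((node) => {
    const href = node.userData && node.userData.href
    if (!href || node.name !== name) return
    if (verb === 'go' && href.startsWith('#'))  result = href   // e.g. '#scene2'
    if (verb === 'do' && !href.includes('#'))   result = href   // e.g. 'https://.../...'
  })
  if (result) activateHref(result)
}

command(scene, 'go', 'abc')   // navigates to '#scene2' if node 'abc' carries that href
```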
## Two-button navigation
For specific user-profiles, gyroscope/mouse/keyboard/audio/visuals will not be available.<br>
Therefore a two-button navigation-interface is the bare minimum (sketched below):
1. objects with href metadata can be cycled via a key (tab on a keyboard)
2. objects with href metadata can be activated via a key (enter on a keyboard)
3. the TTS reads the href-value (and/or aria-description if available)
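
A hedged sketch of such a two-button interface, using the Web Speech API for TTS; `collectHrefObjects()` and `activateHref()` are illustrative helpers, not part of the spec:

```javascript
// illustrative: TAB cycles href-objects, ENTER activates, TTS reads the description
const focusable = collectHrefObjects(scene)     // hypothetical: all nodes with userData.href
let focused = 0

window.addEventListener('keydown', (e) => {
  if (e.key === 'Tab') {
    e.preventDefault()
    focused = (focused + 1) % focusable.length
    const node = focusable[focused]
    const text = node.userData['aria-description'] || node.userData.href
    speechSynthesis.speak(new SpeechSynthesisUtterance(text))    // Web Speech API
  }
  if (e.key === 'Enter') activateHref(focusable[focused].userData.href)
})
```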
If a glTF implementation does not support a particular extension, the (XRF) extras field can serve as a fallback. This way, metadata provided in extras can still be useful for applications that don't handle certain extensions.
> **Example 1** In case of the OMI_LINK glTF extension (`href: https://nlnet.nl`) and an XR Fragment (`href: #otherroom` or `href: otherplanet.glb`), it is clear that `https://nlnet.nl` should open in a browser tab, whereas the XR Fragment links should teleport the user. If the OMI_LINK contains an XR Fragment (`#room1` e.g.), only a teleport should be performed (and other [overlapping] metadata should be ignored).
> **Example 2** If an extension uses XR Fragments in URIs (`href: #otherroom` or `href: xrf://-walls` in OMI_LINK e.g.), then perform them according to the XR Fragments spec (teleport the user), but only once: ignore further overlapping metadata for that usecase.
Vendor-specific metadata in 3D scene-files is similar to vendor-specific [CSS-prefixes](https://en.wikipedia.org/wiki/CSS#Vendor_prefixes) (`-moz-opacity: 0.2` e.g.).
This allows popular 3D engines/frameworks to initialize specific features when loading a scene/object, in a progressively enhanced way.
Vendor prefixes allow embedding 3D engine/framework-specific features in a 3D file via metadata:
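
A purely illustrative example (the `-AFRAME-`/`-GODOT-` prefixes and component names below are hypothetical, not any engine's real vocabulary), mirroring the CSS vendor-prefix idea:

```javascript
// illustrative: vendor-prefixed keys in object extras/userData (hypothetical names)
mesh.userData['-AFRAME-grabbable']       = 'true'
mesh.userData['-GODOT-physics_material'] = 'bouncy'

// an engine-specific loader opts in to 'its own' prefix and ignores the rest
for (const [key, value] of Object.entries(mesh.userData)) {
  if (key.startsWith('-AFRAME-')) initComponent(key.slice('-AFRAME-'.length), value)  // hypothetical helper
}
```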
> Why? Because not all XR interactions can/should be solved/standardized by embedding XR Fragments into any 3D file.
The lowest common denominator between 3D engines is the 'entity'-part of their entity-component-system (ECS). The 'component'-part can be progressively enhanced via vendor prefixes.
String template-values are evaluated as per [URI Templates (RFC6570)](https://www.rfc-editor.org/rfc/rfc6570) Level 1.
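
For illustration, a minimal Level 1 expansion (simple `{var}` substitution over pre-defined variables only); the variable name shown is hypothetical:

```javascript
// illustrative: RFC6570 Level 1 expansion, restricted to pre-defined variables
function expandTemplate(template, vars) {
  return template.replace(/\{(\w+)\}/g, (_, name) =>
    name in vars ? encodeURIComponent(vars[name]) : '')
}

expandTemplate('#room{roomId}', { roomId: 2 })   // '#room2'
```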
> This 'separating of mechanism from policy' (unix rule) does **somewhat** break portability of an XR experience, but still prevents (E-waste of) handcoded virtual worlds. It allows for (XR experience) metadata to survive in future 3D engines and scene-fileformats.
The only dynamic parts are [W3C Media Fragments](https://www.w3.org/TR/media-frags/) and [URI Templates (RFC6570)](https://www.rfc-editor.org/rfc/rfc6570).<br>
The use of URI Templates is limited to pre-defined variables and Level0 fragments-expansion only, which makes it quite safe.<br>
**Q:** Why is everything HTTP GET-based, what about POST/PUT/DELETE HATEOAS?<br>
**A:** Because it's out of scope: XR Fragments specifies a read-only way to surf XR documents. These things belong in the application layer (for example, an XR Hypermedia browser can decide to support POST/PUT/DELETE requests for embedded HTML through `src` values)
**Q:** Why isn't there support for scripting? URI Template Fragments are so limited compared to WASM & JavaScript
**A:** This is out of scope, as it unhyperifies hypermedia; it is up to XR hypermedia browser-extensions.<br> Historically, scripting/JavaScript seems to have been able to turn webpages from hypermedia documents into their opposite (hyperscripted non-hypermedia documents).<br>In order to prevent this backward movement (hypermedia tends to liberate people from finicky scripting), XR Fragments uses [W3C Media Fragments](https://www.w3.org/TR/media-frags/) and [URI Templates (RFC6570)](https://www.rfc-editor.org/rfc/rfc6570), to prevent unhyperifying itself by hardcoupling to a particular markup or scripting language.<br>
XR Fragments only supports filtering objects in a scene, because in the history of the JavaScript-powered web, showing/hiding document-entities seems to be one of the most popular basic usecases.<br>
Doing advanced scripting & network-requests under the hood is obviously an interesting endeavour, but it is something which should not be hardcoupled with XR Fragments or hypermedia.<br>This perhaps belongs more to browser extensions.<br>
Non-HTML hypermedia browsers should treat browser extensions as the right place to 'extend' experiences, in contrast to code/JavaScript inside hypermedia documents (which turned out to be a hypermedia antipattern).
|URI | some resource at something somewhere via someprotocol (`http://me.com/foo.glb#foo` or `e76f8efec8efce98e6f` [see interpeer.io](https://interpeer.io))|
|URL | something somewhere via someprotocol (`http://me.com/foo.glb`) |
|visual-meta | [visual-meta](https://visual.meta.info) data appended to text/books/papers which is indirectly visible/editable in XR. |
|requestless metadata | metadata which never spawns new requests (unlike RDF/HTML, which can cause framerate-dropping, hence not used a lot in games) |
|FPS | frames per second in spatial experiences (games,VR,AR e.g.), should be as high as possible |
|introspective | inward sensemaking ("I feel this belongs to that") |
|extrospective | outward sensemaking ("I'm fairly sure John is a person who lives in oklahoma") |
|`◻` | ascii representation of a 3D object/mesh |
|(un)obtrusive | obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine-symbols and words |
|BibTeX | simple tagging/citing/referencing standard for plaintext |
|BibTag | a BibTeX tag |
|(hashtag)bibs | an easy to speak/type/scan tagging DSL ([see here](https://github.com/coderofsalvation/hashtagbibs)) which expands to BibTeX/JSON/XML |