diff --git a/doc/RFC.md b/doc/RFC.md
deleted file mode 100644
index d1500d5..0000000
--- a/doc/RFC.md
+++ /dev/null
@@ -1,49 +0,0 @@
-
-
-
-
-> version 1.0.0
-
-date: 2023-04-27T22:44:39+0200
-[![Actions Status](https://github.com/coderofsalvation/xrfragment/workflows/test/badge.svg)](https://github.com/coderofsalvation/xrfragment/actions)
-
-# XRFragment Grammar
-
-```
- reserved = gen-delims / sub-delims
- gen-delims = "#" / "&"
- sub-delims = "," / "|" / "="
-```
-
-
-> Example: `://foo.com/my3d.asset#pos=1,0,0&prio=-5&t=0,100|100,200`
-
-
-
-| Explanation | |
-|-|-|
-| `x=1,2,3` | vector/coordinate argument e.g. |
-| `x=foo\|bar|1,2,3|1.0` | the `\|` character is used for: 1. specifying `n` arguments for xrfragment `x` 2. roundrobin of values (in case provided arguments exceed `n` of `x` for #1) when triggered by browser URI (clicking `href` e.g.) |
-| `https://x.co/1.gltf||xyz://x.co/1.gltf` | multi-protocol/fallback urls |
-| `.mygroup` | query-alias for `class:mygroup` |
-
-> Focus: hasslefree 3D vector-data (`,`), multi-protocol/fallback-linking & dynamic values (`|`), and CSS-piggybacking (`.mygroup`)
-
-# URI parser
-> icanhazcode? yes, see [URI.hx](https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/URI.hx)
-
-1. fragment URI starts with `#`
-1. fragments are split by `&`
-1. store key/values into an associative array or dynamic object
-1. loop thru each fragment
-1. for each fragment split on `=` to separate key/values
-1. fragment-values are urlencoded (space becomes `+` using `encodeUriComponent` e.g.)
-1. every recognized fragment key/value-pair is added to a central map/associative array/object
-
-# XR Fragments parser
-
-> icanhazcode? yes, see [Parser.hx](https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Parser.hx)
-the gist of it:
-
-1. check if param exist
-
diff --git a/doc/RFC_XR_Fragments.html b/doc/RFC_XR_Fragments.html
index 1de03ae..fcd92d2 100644
--- a/doc/RFC_XR_Fragments.html
+++ b/doc/RFC_XR_Fragments.html
@@ -14,7 +14,6 @@
body{
font-family: monospace;
max-width: 900px;
- text-align: justify;
font-size: 15px;
padding: 0% 20%;
line-height: 30px;
@@ -23,6 +22,22 @@
}
h1 { margin-top:40px; }
pre{ line-height:18px; }
+ a,a:visited,a:active{ color: #70f; }
+ code{
+ border: 1px solid #AAA;
+ border-radius: 3px;
+ padding: 0px 5px 2px 5px;
+ }
+ pre>code{
+ border:none;
+ border-radius:0px;
+ padding:0;
+ }
+ blockquote{
+ padding-left: 30px;
+ margin: 0;
+ border-left: 5px solid #CCC;
+ }
@@ -52,7 +67,7 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
 How can we add more features to existing text & 3D scenes, without introducing new dataformats? Historically, there’s many attempts to create the ultimate markuplanguage or 3D fileformat.
-However, thru the lens of authorina,g their lowest common denominator is still: plain text.
+However, thru the lens of authoring their lowest common denominator is still: plain text.
 XR Fragments allows us to enrich existing dataformats, by recursive use of existing technologies:
#q=cube
){::boilerplate bcp14-tagged}
@@ -78,13 +94,17 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of existHere’s an ascii representation of a 3D scene-graph which contains 3D objects (◻
) and their metadata:
index.gltf
- │
- ├── ◻ buttonA
- │ └ href: #pos=1,0,1&t=100,200
- │
- └── ◻ buttonB
- └ href: other.fbx
+ +--------------------------------------------------------+
+ | |
+ | index.gltf |
+ | │ |
+ | ├── ◻ buttonA |
+ | │ └ href: #pos=1,0,1&t=100,200 |
+ | │ |
+ | └── ◻ buttonB |
+ | └ href: other.fbx |
+ | |
+ +--------------------------------------------------------+
@@ -94,37 +114,149 @@ In case of buttonA
the end-user will be teleported to another locat
Navigating text
-TODO
+Text in XR has to be unobtrusive, for readers as well as authors.
+We think and speak in simple text, and given the new paradigm of XR interfaces, logically (spoken) text must be enriched afterwards (lazy metadata).
+Therefore, XR Fragment-compliant text will just be plain text, and not yet-another-markuplanguage.
+In contrast to markup languages, this means humans must always be served first, and machines later.
+
+
+Basically, a direct feedbackloop between unobtrusive text and human eye.
+
+
+Reality has shown that outsourcing rich textmanipulation to commercial formats or mono-markup browsers (HTML) has its usecases, but
+also introduces barriers to thought-translation (which uses simple words).
+As Marshall McLuhan said: we have become irrevocably involved with, and responsible for, each other.
+
+In order to enjoy hasslefree batteries-included programmable text (glossaries, flexible views, drag-drop e.g.), XR Fragment supports
+visual-meta(data).
+
+Default Data URI mimetype
+
+The XR Fragment specification bumps the traditional default browser-mimetype
+
+text/plain;charset=US-ASCII
+
+into:
+
+text/plain;charset=utf-8;visual-meta=1
+
+This means that visual-meta(data) can be appended to plain text without being displayed.
+
+URL and Data URI
+
+ +--------------------------------------------------------------+ +------------------------+
+ | | | author.com/article.txt |
+ | index.gltf | +------------------------+
+ | │ | | |
+ | ├── ◻ article_canvas | | Hello friends. |
+ | │ └ src: ://author.com/article.txt | | |
+ | │ | | @{visual-meta-start} |
+ | └── ◻ note_canvas | | ... |
+ | └ src:`data:welcome human @{visual-meta-start}...` | +------------------------+
+ | |
+ | |
+ +--------------------------------------------------------------+
+
+
+The difference is that text (+visual-meta data) in Data URI is saved into the scene, which also promotes rich copy-paste.
+In both cases the text will be rendered immediately (onto a plane geometry, hence the name ‘_canvas’).
+The enduser can access visual-meta(data)-fields only after interacting with the object.
+
+
+NOTE: this is not to say that XR Browsers should not load HTML/PDF/etc-URLs thru src-metadata, it is just that text/plain;charset=utf-8;visual-meta=1 is the minimum requirement.
+
+
+omnidirectional XR annotations
+
+ +---------------------------------------------------------------+
+ | |
+ | index.gltf |
+ | │ |
+ | ├── ◻ todo |
+ | │ └ src:`data:learn about ARC @{visual-meta-start}...`|
+ | │ |
+ | └── ◻ ARC |
+ | └── ◻ plane |
+ | └ src: `data:ARC was revolutionary |
+ | @{visual-meta-start} |
+ | @{glossary-start} |
+ | @entry{ |
+ | name = {ARC}, |
+ | description = {Engelbart Concept: |
+ | Augmentation Research Center, |
+ | The name of Doug's lab at SRI. |
+ | }, |
+ | }` |
+ | |
+ +---------------------------------------------------------------+
+
+
+Here we can see a 3D object of ARC, to which the enduser added a textnote (basically a plane geometry with src
).
+The enduser can view/edit visual-meta(data)-fields only after interacting with the object.
+This allows the 3D scene to perform omnidirectional features for free, by omni-connecting the word ‘ARC’:
+
+
+- the ARC object can draw a line to the ‘ARC was revolutionary’-note
+- the ‘ARC was revolutionary’-note can draw a line to the ‘learn about ARC’-note
+- the ‘learn about ARC’-note can draw a line to the ARC 3D object
+
+
+HYPER copy/paste
+
+The previous example offers something exciting compared to simple textual copy-paste:
+XR Fragment offers 4D- and HYPER- copy/paste: time, space and text interlinked.
+Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share data in these ways:
+
+
+- copy ARC 3D object (incl. animation) & paste elsewhere including visual-meta(data)
+- select the word ARC in any text, and paste a bundle of anything ARC-related
+
+
+Plain Text (with optional visual-meta)
+
+In contrast to markuplanguage, the (dictated/written) text needs no parsing and stays intact, since metadata is postponed to the appendix.
+
+This allows for a very economic XR way to:
+
+
+- directly write, dictate, render text (=fast, without markup-parser-overhead)
+- add/load metadata later (if provided)
+- enduser interactions with text (annotations,mutations) can be reflected back into the visual-meta(data) Data URI
+- copy/pasting of text will automatically cite the (mutated) source
+- allows annotating 3D objects as if they were textual representations (convert 3D document to text)
+
+
+
+NOTE: visualmeta never breaks the original intended text (in contrast to forgetting a html closing-tag e.g.)
+
Embedding 3D content
Here’s an ascii representation of a 3D scene-graph with 3D objects (◻
) which embeds remote & local 3D objects (◻
) (without) using queries:
- +------------------------------------------------------------+ +---------------------------+
- | | | |
- | index.gltf | | rescue.com/aquarium.gltf |
- | │ | | │ |
- | ├── ◻ canvas | | └── ◻ fishbowl |
- | │ └ src: painting.png | | ├─ ◻ bassfish |
- | │ | | └─ ◻ tuna |
- | ├── ◻ aquariumcube | | |
- | │ └ src: ://rescue.com/fish.gltf#q=bassfish%20tuna | +---------------------------+
- | │ |
- | ├── ◻ bedroom |
- | │ └ src: #q=canvas |
- | │ |
- | └── ◻ livingroom |
- | └ src: #q=canvas |
- | |
- +------------------------------------------------------------+
+ +--------------------------------------------------------+ +-------------------------+
+ | | | |
+ | index.gltf | | ocean.com/aquarium.fbx |
+ | │ | | │ |
+ | ├── ◻ canvas | | └── ◻ fishbowl |
+ | │ └ src: painting.png | | ├─ ◻ bass |
+ | │ | | └─ ◻ tuna |
+ | ├── ◻ aquariumcube | | |
+ | │ └ src: ://rescue.com/fish.gltf#q=bass%20tuna | +-------------------------+
+ | │ |
+ | ├── ◻ bedroom |
+ | │ └ src: #q=canvas |
+ | │ |
+ | └── ◻ livingroom |
+ | └ src: #q=canvas |
+ | |
+ +--------------------------------------------------------+
An XR Fragment-compatible browser viewing this scene, lazy-loads and projects painting.png onto the (plane) object called canvas (which is copy-instanced in the bed and livingroom).
-Also, after lazy-loading rescue.com/aquarium.gltf, only the queried objects bassfish and tuna will be instanced inside aquariumcube.
+Also, after lazy-loading ocean.com/aquarium.gltf, only the queried objects bass and tuna will be instanced inside aquariumcube.
Resizing will happen according to its placeholder object (aquariumcube
), see chapter Scaling.
-Embedding text
-
List of XR URI Fragments
Security Considerations
diff --git a/doc/RFC_XR_Fragments.md b/doc/RFC_XR_Fragments.md
index 35c467b..2147f1f 100644
--- a/doc/RFC_XR_Fragments.md
+++ b/doc/RFC_XR_Fragments.md
@@ -26,7 +26,6 @@ fullname="L.R. van Kammen"
body{
font-family: monospace;
max-width: 900px;
- text-align: justify;
font-size: 15px;
padding: 0% 20%;
line-height: 30px;
@@ -35,6 +34,22 @@ fullname="L.R. van Kammen"
}
h1 { margin-top:40px; }
pre{ line-height:18px; }
+ a,a:visited,a:active{ color: #70f; }
+ code{
+ border: 1px solid #AAA;
+ border-radius: 3px;
+ padding: 0px 5px 2px 5px;
+ }
+ pre>code{
+ border:none;
+ border-radius:0px;
+ padding:0;
+ }
+ blockquote{
+ padding-left: 30px;
+ margin: 0;
+ border-left: 5px solid #CCC;
+ }
@@ -61,8 +76,6 @@ This draft offers a specification for 4D URLs & navigation, to link 3D scenes an
The specification promotes spatial addressibility, sharing, navigation, query-ing and interactive text across for (XR) Browsers.
XR Fragments allows us to enrich existing dataformats, by recursive use of existing technologies like [URI Fragments](https://en.wikipedia.org/wiki/URI_fragment) & [visual-meta](https://visual-meta.info).
-{mainmatter}
-
# Introduction
How can we add more features to existing text & 3D scenes, without introducing new dataformats?
@@ -71,7 +84,7 @@ However, thru the lens of authoring their lowest common denominator is still: pl
XR Fragments allows us to enrich existing dataformats, by recursive use of existing technologies:
* addressibility & navigation of 3D objects: [URI Fragments](https://en.wikipedia.org/wiki/URI_fragment) + (src/href) metadata
-* addressibility & navigation of text objects: [visual-meta](https://visual-meta.info)
+* bi-directional links between text and spatial objects: [visual-meta](https://visual-meta.info)
# Conventions and Definitions
@@ -82,21 +95,43 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
* src: a (HTML-piggybacked) metadata-attribute of a 3D object which instances content
* href: a (HTML-piggybacked) metadata-attribute of a 3D object which links to content
* query: an URI Fragment-operator which queries object(s) from a scene (`#q=cube`)
+* [visual-meta](https://visual-meta.info): metadata appended to text which is only indirectly visible/editable in XR.
{::boilerplate bcp14-tagged}
+# List of URI Fragments
+
+| fragment | type | example | info |
+|--------------|----------|---------------|------------------------------------------------------|
+| #pos | vector3 | #pos=0.5,0,0 | positions camera to xyz-coord 0.5,0,0 |
+| #rot         | vector3  | #rot=0,90,0   | rotates camera to xyz-rotation 0,90,0                |
+| #t | vector2 | #t=500,1000 | sets animation-loop range between frame 500 and 1000 |
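
The fragment list above can be sketched as a tiny parser. The snippet below is illustrative only (the reference parser is the Haxe `URI.hx`); Python, the function name and the arity table are assumptions, with the expected value-counts taken from the table:

```python
from urllib.parse import unquote_plus

# Expected value-count per fragment, taken from the table above.
FRAGMENT_ARITY = {"pos": 3, "rot": 3, "t": 2}

def parse_xr_fragments(url):
    """Parse the #-fragment of an XR URL into {key: [float, ...]}."""
    frag = url.split("#", 1)[1] if "#" in url else ""
    out = {}
    for pair in frag.split("&"):
        if "=" not in pair:
            continue
        key, value = pair.split("=", 1)
        try:
            values = [float(v) for v in unquote_plus(value).split(",")]
        except ValueError:
            continue  # non-numeric value: not one of the vector fragments above
        if key in FRAGMENT_ARITY and len(values) == FRAGMENT_ARITY[key]:
            out[key] = values
    return out
```

For example, `parse_xr_fragments("index.gltf#pos=0.5,0,0&t=500,1000")` yields `{"pos": [0.5, 0.0, 0.0], "t": [500.0, 1000.0]}`.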
+
+# List of metadata for 3D nodes
+
+| key | type | example | info |
+|--------------|----------|-----------------|--------------------------------------------------------|
+| name | string | name: "cube" | already available in all 3D fileformats & scenes |
+| class | string | class: "cubes" | supported through custom property in 3D fileformats |
+| href | string | href: "b.gltf" | supported through custom property in 3D fileformats |
+| src | string | src: "#q=cube" | supported through custom property in 3D fileformats |
+
# Navigating 3D
Here's an ascii representation of a 3D scene-graph which contains 3D objects (`◻`) and their metadata:
```
- index.gltf
- │
- ├── ◻ buttonA
- │ └ href: #pos=1,0,1&t=100,200
- │
- └── ◻ buttonB
- └ href: other.fbx
+ +--------------------------------------------------------+
+ | |
+ | index.gltf |
+ | │ |
+ | ├── ◻ buttonA |
+ | │ └ href: #pos=1,0,1&t=100,200 |
+ | │ |
+ | └── ◻ buttonB |
+ | └ href: other.fbx |
+ | |
+ +--------------------------------------------------------+
```
@@ -104,40 +139,184 @@ An XR Fragment-compatible browser viewing this scene, allows the end-user to int
In case of `buttonA` the end-user will be teleported to another location and time in the **current loaded scene**, but `buttonB` will
**replace the current scene** with a new one (`other.fbx`).
-# Navigating text
-
-TODO
-
# Embedding 3D content
Here's an ascii representation of a 3D scene-graph with 3D objects (`◻`) which embeds remote & local 3D objects (`◻`) (without) using queries:
```
- +------------------------------------------------------------+ +---------------------------+
- | | | |
- | index.gltf | | rescue.com/aquarium.gltf |
- | │ | | │ |
- | ├── ◻ canvas | | └── ◻ fishbowl |
- | │ └ src: painting.png | | ├─ ◻ bassfish |
- | │ | | └─ ◻ tuna |
- | ├── ◻ aquariumcube | | |
- | │ └ src: ://rescue.com/fish.gltf#q=bassfish%20tuna | +---------------------------+
- | │ |
- | ├── ◻ bedroom |
- | │ └ src: #q=canvas |
- | │ |
- | └── ◻ livingroom |
- | └ src: #q=canvas |
- | |
- +------------------------------------------------------------+
+ +--------------------------------------------------------+ +-------------------------+
+ | | | |
+ | index.gltf | | ocean.com/aquarium.fbx |
+ | │ | | │ |
+ | ├── ◻ canvas | | └── ◻ fishbowl |
+ | │ └ src: painting.png | | ├─ ◻ bass |
+ | │ | | └─ ◻ tuna |
+ | ├── ◻ aquariumcube | | |
+ | │ └ src: ://rescue.com/fish.gltf#q=bass%20tuna | +-------------------------+
+ | │ |
+ | ├── ◻ bedroom |
+ | │ └ src: #q=canvas |
+ | │ |
+ | └── ◻ livingroom |
+ | └ src: #q=canvas |
+ | |
+ +--------------------------------------------------------+
```
An XR Fragment-compatible browser viewing this scene, lazy-loads and projects `painting.png` onto the (plane) object called `canvas` (which is copy-instanced in the bed and livingroom).
-Also, after lazy-loading `rescue.com/aquarium.gltf`, only the queried objects `bassfish` and `tuna` will be instanced inside `aquariumcube`.
+Also, after lazy-loading `ocean.com/aquarium.gltf`, only the queried objects `bass` and `tuna` will be instanced inside `aquariumcube`.
Resizing will happen according to its placeholder object (`aquariumcube`), see chapter Scaling.
+
# Embedding text
+Text in XR has to be unobtrusive, for readers as well as authors.
+We think and speak in simple text, and given the new paradigm of XR interfaces, logically (spoken) text must be enriched *afterwards* (lazy metadata).
+Therefore, **yet-another-markuplanguage** is not going to get us very far.
+What will get us far is XR interfaces that always guarantee direct feedbackloops between plaintext and humans.
+Humans must always be served first, and machines later.
+
+In the next chapter you can see how XR Fragments enjoys hasslefree rich text, by adding [visual-meta](https://visual-meta.info)(data) support to plain text.
+
+## Default Data URI mimetype
+
+The XR Fragment specification bumps the traditional default browser-mimetype
+
+`text/plain;charset=US-ASCII`
+
+to:
+
+`text/plain;charset=utf-8;visual-meta=1`
+
+This means that [visual-meta](https://visual-meta.info)(data) can be appended to plain text without being displayed.
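
A minimal sketch of what "appended without being displayed" could mean for a renderer, assuming the `@{visual-meta-start}` marker used in the examples of this spec (Python and the function name are illustrative):

```python
# Marker assumed from the visual-meta examples in this spec.
VM_MARKER = "@{visual-meta-start}"

def split_visual_meta(text):
    """Split plain text into (visible text, visual-meta appendix)."""
    if VM_MARKER in text:
        visible, meta = text.split(VM_MARKER, 1)
        # Only the visible part gets rendered; the appendix stays machine-readable.
        return visible.rstrip(), VM_MARKER + meta
    return text, ""
```

A Data URI like `data:welcome human @{visual-meta-start}...` would thus render only `welcome human`, while the appendix travels along on copy-paste.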
+
+### URL and Data URI
+
+```
+ +--------------------------------------------------------------+ +------------------------+
+ | | | author.com/article.txt |
+ | index.gltf | +------------------------+
+ | │ | | |
+ | ├── ◻ article_canvas | | Hello friends. |
+ | │ └ src: ://author.com/article.txt | | |
+ | │ | | @{visual-meta-start} |
+ | └── ◻ note_canvas | | ... |
+ | └ src:`data:welcome human @{visual-meta-start}...` | +------------------------+
+ | |
+ | |
+ +--------------------------------------------------------------+
+```
+
+The enduser will only see `welcome human` rendered spatially.
+The beauty is that text (AND visual-meta) in Data URI is saved into the scene, which also promotes rich copy-paste.
+In both cases the text will be rendered immediately (onto a plane geometry, hence the name '_canvas').
+The XR Fragment-compatible browser can let the enduser access visual-meta(data)-fields after interacting with the object (contextmenu e.g.).
+
+> NOTE: this is not to say that XR Browsers should not load HTML/PDF/etc-URLs thru `src`, it is just that `text/plain;charset=utf-8;visual-meta=1` is the default.
+
+The mapping between 3D objects and text (src-data) is simple:
+
+Example:
+
+```
+ +------------------------------------------------------------------------------------+
+ | |
+ | index.gltf |
+ | │ |
+ | ├── ◻ AI |
+ | │ └ class: tech |
+ | │ |
+ | └ src:`data:@{visual-meta-start} |
+ | @{glossary-start} |
+ | @entry{ |
+ | name="AI", |
+ | alt-name1 = "Artificial Intelligence", |
+ | description="Artificial intelligence", |
+ | url = "https://en.wikipedia.org/wiki/Artificial_intelligence", |
+ | } |
+ | @entry{ |
+ | name="tech" |
+ | alt-name1="technology" |
+ | description="when monkeys start to play with things" |
+ | }` |
+ +------------------------------------------------------------------------------------+
+```
+
+Attaching visualmeta as `src` metadata to the (root) scene-node hints the XR Fragment browser.
+3D object names and classes map to `name` of visual-meta glossary-entries.
+This allows rich interaction and interlinking between text and 3D objects:
+
+1. When the user surfs to https://.../index.gltf#AI the XR Fragments-parser points the enduser to the AI object, and can show contextual info about it.
+2. When (partial) remote content is embedded thru XR Fragment queries (see XR Fragment queries), its related visual-meta can be embedded along.
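
The name/class-to-glossary mapping could look as follows. This is a sketch, not the spec's implementation: the glossary is shown as an already-parsed list of dicts (parsing the `@entry{}` syntax is omitted), and the function name is hypothetical:

```python
# Hypothetical in-memory shapes: a glossary parsed from visual-meta @entry{}
# blocks, and a 3D node with name/class metadata (values from the example above).
glossary = [
    {"name": "AI", "description": "Artificial intelligence"},
    {"name": "tech", "description": "when monkeys start to play with things"},
]

def find_entries(node, glossary):
    """Return glossary entries whose name matches the node's name or class."""
    keys = {node.get("name"), node.get("class")}
    return [entry for entry in glossary if entry["name"] in keys]
```

A browser resolving `#AI` could then call `find_entries({"name": "AI", "class": "tech"}, glossary)` to collect contextual info for both the object name and its class.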
+
+# HYPER copy/paste
+
+The previous example offers something exciting compared to simple copy/paste of 3D objects or text.
+XR Fragment allows HYPER-copy/paste: time, space and text interlinked.
+Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share data in these ways:
+
+* time/space: 3D object (current animation-loop)
+* text: Text object (including visual-meta if any)
+* interlinked: Collected objects by visual-meta tag
+
+# XR Fragment queries
+
+Include, exclude, hide/shows objects using space-separated strings:
+
+* `#q=cube`
+* `#q=cube -ball_inside_cube`
+* `#q=* -sky`
+* `#q=-.language .english`
+* `#q=cube&rot=0,90,0`
+* `#q=price:>2 price:<5`
+
+It's a simple but powerful syntax which allows css-like class/id-selectors with a searchengine prompt-style feeling:
+
+1. queries are only executed when embedded in the asset/scene (thru `src`). This is to prevent sharing of scene-tampered URL's.
+2. search words are matched against 3D object names or metadata-key(values)
+3. `#` equals `#q=*`
+4. words starting with `.` (`.language`) indicate class-properties
+
+> **For example**: `#q=.foo` is a shorthand for `#q=class:foo`, which will select objects with custom property `class`:`foo`. A simple `#q=cube` will just select an object named `cube`.
+
+* see [an example video here](https://coderofsalvation.github.io/xrfragment.media/queries.mp4)
+
+### including/excluding
+
+|''operator'' | ''info'' |
+|`*` | select all objects (only allowed in `src` custom property) in the current scene (after the default [[predefined_view|predefined_view]] `#` was executed)|
+|`-` | removes/hides object(s) |
+|`:` | indicates an object-embedded custom property key/value |
+|`.` | alias for `class:` (`.foo` equals `class:foo`) |
+|`>` `<`| compare float or int number|
+|`/` | reference to root-scene. Useful in case of (preventing) showing/hiding objects in nested scenes (instanced by [[src]]). `#q=-/cube` hides object `cube` only in the root-scene (not nested `cube` objects). `#q=-cube` hides both object `cube` in the root-scene AND nested `cube` objects |
+
+[» example implementation](https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/q.js)
+[» example 3D asset](https://github.com/coderofsalvation/xrfragment/blob/main/example/assets/query.gltf#L192)
+[» discussion](https://github.com/coderofsalvation/xrfragment/issues/3)
+
+## Query Parser
+
+Here's how to write a query parser:
+
+1. create an associative array/object to store query-arguments as objects
+1. detect object id's & properties `foo:1` and `foo` (reference regex: `/^.*:[><=!]?/` )
+1. detect excluders like `-foo`,`-foo:1`,`-.foo`,`-/foo` (reference regex: `/^-/` )
+1. detect root selectors like `/foo` (reference regex: `/^[-]?\//` )
+1. detect class selectors like `.foo` (reference regex: `/^[-]?class$/` )
+1. detect number values like `foo:1` (reference regex: `/^[0-9\.]+$/` )
+1. expand aliases like `.foo` into `class:foo`
+1. for every query token split string on `:`
+1. create an empty array `rules`
+1. then strip key-operator: convert "-foo" into "foo"
+1. add operator and value to rule-array
+1. therefore we set `id` to `true` or `false` (false=excluder `-`)
+1. and we set `root` to `true` or `false` (true=`/` root selector is present)
+1. we convert key '/foo' into 'foo'
+1. finally we add the key/value to the store (`store.foo = {id:false,root:true}` e.g.)
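
The steps above can be sketched roughly as follows. This is an illustrative Python sketch, not the reference implementation (`Query.hx`); the function name and the exact rule-dict shape are assumptions:

```python
def parse_query(q):
    """Parse an XR Fragment query string (e.g. 'cube -ball price:>5 -/skybox')."""
    store = {}
    for token in q.split():
        rule = {"id": True, "root": False}
        if token.startswith("-"):          # excluder: hide instead of show
            rule["id"] = False
            token = token[1:]
        if token.startswith("/"):          # root-scene selector
            rule["root"] = True
            token = token[1:]
        if token.startswith("."):          # expand alias: .foo equals class:foo
            token = "class:" + token[1:]
        if ":" in token:                   # object-embedded custom property
            key, value = token.split(":", 1)
            rule["value"] = value          # e.g. price:>5 -> {'value': '>5'}
        else:
            key = token
        store[key] = rule
    return store
```

For instance, `parse_query("-/cube")` stores `{"cube": {"id": False, "root": True}}`, matching the `store.foo = {id:false,root:true}` example in step 15.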
+
+> An example query-parser (which compiles to many languages) can be [found here](https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Query.hx)
+
# List of XR URI Fragments
# Security Considerations
diff --git a/doc/RFC_XR_Fragments.txt b/doc/RFC_XR_Fragments.txt
index 981b38c..986146d 100644
--- a/doc/RFC_XR_Fragments.txt
+++ b/doc/RFC_XR_Fragments.txt
@@ -3,9 +3,9 @@
Internet Engineering Task Force L.R. van Kammen
-Internet-Draft 31 August 2023
+Internet-Draft 1 September 2023
Intended status: Informational
-Expires: 3 March 2024
+Expires: 4 March 2024
XR Fragments
@@ -37,7 +37,7 @@ Status of This Memo
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
- This Internet-Draft will expire on 3 March 2024.
+ This Internet-Draft will expire on 4 March 2024.
Copyright Notice
@@ -53,9 +53,9 @@ Copyright Notice
-van Kammen Expires 3 March 2024 [Page 1]
+van Kammen Expires 4 March 2024 [Page 1]
-Internet-Draft XR Fragments August 2023
+Internet-Draft XR Fragments September 2023
This document is subject to BCP 78 and the IETF Trust's Legal
@@ -73,21 +73,25 @@ Table of Contents
2. Conventions and Definitions . . . . . . . . . . . . . . . . . 2
3. Navigating 3D . . . . . . . . . . . . . . . . . . . . . . . . 3
4. Navigating text . . . . . . . . . . . . . . . . . . . . . . . 3
- 5. Embedding 3D content . . . . . . . . . . . . . . . . . . . . 3
- 6. Embedding text . . . . . . . . . . . . . . . . . . . . . . . 4
- 7. List of XR URI Fragments . . . . . . . . . . . . . . . . . . 4
- 8. Security Considerations . . . . . . . . . . . . . . . . . . . 4
- 9. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 4
- 10. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 4
+ 4.1. Default Data URI mimetype . . . . . . . . . . . . . . . . 4
+ 4.1.1. URL and Data URI . . . . . . . . . . . . . . . . . . 4
+ 4.2. omnidirectional XR annotations . . . . . . . . . . . . . 5
+ 5. HYPER copy/paste . . . . . . . . . . . . . . . . . . . . . . 5
+ 5.1. Plain Text (with optional visual-meta) . . . . . . . . . 6
+ 6. Embedding 3D content . . . . . . . . . . . . . . . . . . . . 6
+ 7. List of XR URI Fragments . . . . . . . . . . . . . . . . . . 7
+ 8. Security Considerations . . . . . . . . . . . . . . . . . . . 7
+ 9. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 7
+ 10. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 7
1. Introduction
How can we add more features to existing text & 3D scenes, without
introducing new dataformats? Historically, there's many attempts to
create the ultimate markuplanguage or 3D fileformat. However, thru
- the lens of authorina,g their lowest common denominator is still:
- plain text. XR Fragments allows us to enrich existing dataformats,
- by recursive use of existing technologies:
+ the lens of authoring their lowest common denominator is still: plain
+ text. XR Fragments allows us to enrich existing dataformats, by
+ recursive use of existing technologies:
* addressibility & navigation of 3D objects: URI Fragments
(https://en.wikipedia.org/wiki/URI_fragment) + (src/href) metadata
@@ -102,20 +106,22 @@ Table of Contents
* metadata: custom properties defined in 3D Scene or Object(nodes)
* XR fragment: URI Fragment with spatial hints (#pos=0,0,0&t=1,100
e.g.)
+
+
+
+van Kammen Expires 4 March 2024 [Page 2]
+
+Internet-Draft XR Fragments September 2023
+
+
* src: a (HTML-piggybacked) metadata-attribute of a 3D object which
instances content
* href: a (HTML-piggybacked) metadata-attribute of a 3D object which
links to content
-
-
-
-van Kammen Expires 3 March 2024 [Page 2]
-
-Internet-Draft XR Fragments August 2023
-
-
* query: an URI Fragment-operator which queries object(s) from a
scene (#q=cube)
+   * visual-meta (https://visual-meta.info): metadata appended to text
+ which is only indirectly visible/editable in XR.
{::boilerplate bcp14-tagged}
@@ -124,13 +130,17 @@ Internet-Draft XR Fragments August 2023
Here's an ascii representation of a 3D scene-graph which contains 3D
objects (◻) and their metadata:
- index.gltf
- │
- ├── ◻ buttonA
- │ └ href: #pos=1,0,1&t=100,200
- │
- └── ◻ buttonB
- └ href: other.fbx
+ +--------------------------------------------------------+
+ | |
+ | index.gltf |
+ | │ |
+ | ├── ◻ buttonA |
+ | │ └ href: #pos=1,0,1&t=100,200 |
+ | │ |
+ | └── ◻ buttonB |
+ | └ href: other.fbx |
+ | |
+ +--------------------------------------------------------+
An XR Fragment-compatible browser viewing this scene, allows the end-
user to interact with the buttonA and buttonB. In case of buttonA
@@ -140,14 +150,180 @@ Internet-Draft XR Fragments August 2023
4. Navigating text
- TODO
+ Text in XR has to be unobtrusive, for readers as well as authors. We
+ think and speak in simple text, and given the new paradigm of XR
+ interfaces, logically (spoken) text must be enriched _afterwards_
+ (lazy metadata). Therefore, XR Fragment-compliant text will just be
+ plain text, and *not yet-another-markuplanguage*. In contrast to
+   markup languages, this means humans must always be served first,
+   and machines later.
-5. Embedding 3D content
+ | Basically, a direct feedbackloop between unobtrusive text and
+ | human eye.
+
+
+
+
+
+van Kammen Expires 4 March 2024 [Page 3]
+
+Internet-Draft XR Fragments September 2023
+
+
+   Reality has shown that outsourcing rich textmanipulation to
+   commercial formats or mono-markup browsers (HTML) has its
+   usecases, but also introduces barriers to thought-translation
+   (which uses simple words).  As Marshall McLuhan said: we have
+   become irrevocably involved with, and responsible for, each other.
+
+   In order to enjoy hasslefree batteries-included programmable text
+   (glossaries, flexible views, drag-drop e.g.), XR Fragment supports
+   visual-meta (https://visual-meta.info)(data).
+
+4.1. Default Data URI mimetype
+
+ The XR Fragment specification bumps the traditional default browser-
+ mimetype
+
+ text/plain;charset=US-ASCII
+
+ into:
+
+ text/plain;charset=utf-8;visual-meta=1
+
+   This means that visual-meta (https://visual-meta.info)(data) can be
+ appended to plain text without being displayed.
+
+4.1.1. URL and Data URI
+
+ +--------------------------------------------------------------+ +------------------------+
+ | | | author.com/article.txt |
+ | index.gltf | +------------------------+
+ | │ | | |
+ | ├── ◻ article_canvas | | Hello friends. |
+ | │ └ src: ://author.com/article.txt | | |
+ | │ | | @{visual-meta-start} |
+ | └── ◻ note_canvas | | ... |
+ | └ src:`data:welcome human @{visual-meta-start}...` | +------------------------+
+ | |
+ | |
+ +--------------------------------------------------------------+
+
+ The difference is that text (+visual-meta data) in Data URI is saved
+   into the scene, which also promotes rich copy-paste.  In both cases
+   the text will be rendered immediately (onto a plane geometry, hence
+   the name '_canvas').  The enduser can access visual-meta(data)-fields
+ only after interacting with the object.
+
+ | NOTE: this is not to say that XR Browsers should not load
+ | HTML/PDF/etc-URLs thru src-metadata, it is just that text/
+ | plain;charset=utf-8;visual-meta=1 is the minimum requirement.
+
+
+
+van Kammen Expires 4 March 2024 [Page 4]
+
+Internet-Draft XR Fragments September 2023
+
+
+4.2. omnidirectional XR annotations
+
+ +---------------------------------------------------------------+
+ | |
+ | index.gltf |
+ | │ |
+ | ├── ◻ todo |
+ | │ └ src:`data:learn about ARC @{visual-meta-start}...`|
+ | │ |
+ | └── ◻ ARC |
+ | └── ◻ plane |
+ | └ src: `data:ARC was revolutionary |
+ | @{visual-meta-start} |
+ | @{glossary-start} |
+ | @entry{ |
+ | name = {ARC}, |
+ | description = {Engelbart Concept: |
+ | Augmentation Research Center, |
+ | The name of Doug's lab at SRI. |
+ | }, |
+ | }` |
+ | |
+ +---------------------------------------------------------------+
+
+   Here we can see a 3D object of ARC, to which the enduser added a
+   textnote (basically a plane geometry with src).  The enduser can
+ view/edit visual-meta(data)-fields only after interacting with the
+ object. This allows the 3D scene to perform omnidirectional features
+ for free, by omni-connecting the word 'ARC':
+
+ * the ARC object can draw a line to the 'ARC was revolutionary'-note
+   * the 'ARC was revolutionary'-note can draw a line to the 'learn about
+ ARC'-note
+ * the 'learn about ARC'-note can draw a line to the ARC 3D object
+
+5. HYPER copy/paste
+
+   The previous example offers something exciting compared to simple
+   textual copy-paste: XR Fragment offers 4D- and HYPER- copy/paste:
+   time, space and text interlinked.  Therefore, the enduser in an XR
+   Fragment-compatible browser can copy/paste/share data in these ways:
+
+ * copy ARC 3D object (incl. animation) & paste elsewhere including
+ visual-meta(data)
+ * select the word ARC in any text, and paste a bundle of anything
+ ARC-related
+
+
+
+
+
+van Kammen Expires 4 March 2024 [Page 5]
+
+Internet-Draft XR Fragments September 2023
+
+
+5.1. Plain Text (with optional visual-meta)
+
+ In contrast to markup languages, the (dictated/written) text needs
+ no parsing and stays intact, because metadata is postponed to the
+ appendix.
+
+ This allows for a very economic XR way to:
+
+ * directly write, dictate, render text (=fast, without markup-
+ parser-overhead)
+ * add/load metadata later (if provided)
+ * enduser interactions with text (annotations,mutations) can be
+ reflected back into the visual-meta(data) Data URI
+ * copy/pasting of text will automatically cite the (mutated) source
+ * allows annotating 3D objects as if they were textual
+ representations (convert 3D document to text)
+
+ | NOTE: visual-meta never breaks the originally intended text (in
+ | contrast to e.g. forgetting an HTML closing-tag)
+
+6. Embedding 3D content
Here's an ascii representation of a 3D scene-graph with 3D objects
(◻) which embeds remote & local 3D objects (◻), with or without
queries:
+ +--------------------------------------------------------+ +-------------------------+
+ | | | |
+ | index.gltf | | ocean.com/aquarium.fbx |
+ | │ | | │ |
+ | ├── ◻ canvas | | └── ◻ fishbowl |
+ | │ └ src: painting.png | | ├─ ◻ bass |
+ | │ | | └─ ◻ tuna |
+ | ├── ◻ aquariumcube | | |
+ | │ └ src: ://rescue.com/fish.gltf#q=bass%20tuna | +-------------------------+
+ | │ |
+ | ├── ◻ bedroom |
+ | │ └ src: #q=canvas |
+ | │ |
+ | └── ◻ livingroom |
+ | └ src: #q=canvas |
+ | |
+ +--------------------------------------------------------+
@@ -157,47 +333,19 @@ Internet-Draft XR Fragments August 2023
-
-
-
-
-
-
-
-
-van Kammen Expires 3 March 2024 [Page 3]
+van Kammen Expires 4 March 2024 [Page 6]
-Internet-Draft XR Fragments August 2023
+Internet-Draft XR Fragments September 2023
- +------------------------------------------------------------+ +---------------------------+
- | | | |
- | index.gltf | | rescue.com/aquarium.gltf |
- | │ | | │ |
- | ├── ◻ canvas | | └── ◻ fishbowl |
- | │ └ src: painting.png | | ├─ ◻ bassfish |
- | │ | | └─ ◻ tuna |
- | ├── ◻ aquariumcube | | |
- | │ └ src: ://rescue.com/fish.gltf#q=bassfish%20tuna | +---------------------------+
- | │ |
- | ├── ◻ bedroom |
- | │ └ src: #q=canvas |
- | │ |
- | └── ◻ livingroom |
- | └ src: #q=canvas |
- | |
- +------------------------------------------------------------+
-
An XR Fragment-compatible browser viewing this scene, lazy-loads and
projects painting.png onto the (plane) object called canvas (which is
copy-instanced in the bed and livingroom). Also, after lazy-loading
- rescue.com/aquarium.gltf, only the queried objects bassfish and tuna
- will be instanced inside aquariumcube. Resizing will be happen
+ ocean.com/aquarium.gltf, only the queried objects bass and tuna will
+ be instanced inside aquariumcube. Resizing will happen
according to its placeholder object (aquariumcube), see chapter
Scaling.
-6. Embedding text
-
7. List of XR URI Fragments
8. Security Considerations
@@ -221,4 +369,24 @@ Internet-Draft XR Fragments August 2023
-van Kammen Expires 3 March 2024 [Page 4]
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+van Kammen Expires 4 March 2024 [Page 7]
diff --git a/doc/RFC_XR_Fragments.xml b/doc/RFC_XR_Fragments.xml
index 21b6f3a..88f88ad 100644
--- a/doc/RFC_XR_Fragments.xml
+++ b/doc/RFC_XR_Fragments.xml
@@ -15,19 +15,15 @@ The specification promotes spatial addressibility, sharing, navigation, query-in
XR Fragments allows us to enrich existing dataformats, by recursive use of existing technologies like URI Fragments & visual-meta .
-
-
-
-
Introduction
How can we add more features to existing text & 3D scenes, without introducing new dataformats?
Historically, there's many attempts to create the ultimate markuplanguage or 3D fileformat.
-However, thru the lens of authorina,g their lowest common denominator is still: plain text.
+However, thru the lens of authoring their lowest common denominator is still: plain text.
XR Fragments allows us to enrich existing dataformats, by recursive use of existing technologies:
- addressibility & navigation of 3D objects:
URI Fragments + (src/href) metadata
-- addressibility & navigation of text objects:
visual-meta
+- bi-directional links between text and spatial objects:
visual-meta
@@ -41,6 +37,7 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
src: a (HTML-piggybacked) metadata-attribute of a 3D object which instances content
href: a (HTML-piggybacked) metadata-attribute of a 3D object which links to content
query: an URI Fragment-operator which queries object(s) from a scene (#q=cube)
+visual-meta : metadata appended to text which is only indirectly visible/editable in XR.
{::boilerplate bcp14-tagged}
@@ -48,13 +45,17 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
Navigating 3D
Here's an ascii representation of a 3D scene-graph which contains 3D objects (◻) and their metadata:
- index.gltf
- │
- ├── ◻ buttonA
- │ └ href: #pos=1,0,1&t=100,200
- │
- └── ◻ buttonB
- └ href: other.fbx
+ +--------------------------------------------------------+
+ | |
+ | index.gltf |
+ | │ |
+ | ├── ◻ buttonA |
+ | │ └ href: #pos=1,0,1&t=100,200 |
+ | │ |
+ | └── ◻ buttonB |
+ | └ href: other.fbx |
+ | |
+ +--------------------------------------------------------+
An XR Fragment-compatible browser viewing this scene, allows the end-user to interact with the buttonA and buttonB.
@@ -62,37 +63,180 @@ In case of buttonA the end-user will be teleported to another location
replace the current scene with a new one (other.fbx).
-Navigating text
-TODO
-
-
Embedding 3D content
Here's an ascii representation of a 3D scene-graph with 3D objects (◻) which embeds remote & local 3D objects (◻) (without) using queries:
- +------------------------------------------------------------+ +---------------------------+
- | | | |
- | index.gltf | | rescue.com/aquarium.gltf |
- | │ | | │ |
- | ├── ◻ canvas | | └── ◻ fishbowl |
- | │ └ src: painting.png | | ├─ ◻ bassfish |
- | │ | | └─ ◻ tuna |
- | ├── ◻ aquariumcube | | |
- | │ └ src: ://rescue.com/fish.gltf#q=bassfish%20tuna | +---------------------------+
- | │ |
- | ├── ◻ bedroom |
- | │ └ src: #q=canvas |
- | │ |
- | └── ◻ livingroom |
- | └ src: #q=canvas |
- | |
- +------------------------------------------------------------+
+ +--------------------------------------------------------+ +-------------------------+
+ | | | |
+ | index.gltf | | ocean.com/aquarium.fbx |
+ | │ | | │ |
+ | ├── ◻ canvas | | └── ◻ fishbowl |
+ | │ └ src: painting.png | | ├─ ◻ bass |
+ | │ | | └─ ◻ tuna |
+ | ├── ◻ aquariumcube | | |
+ | │ └ src: ://rescue.com/fish.gltf#q=bass%20tuna | +-------------------------+
+ | │ |
+ | ├── ◻ bedroom |
+ | │ └ src: #q=canvas |
+ | │ |
+ | └── ◻ livingroom |
+ | └ src: #q=canvas |
+ | |
+ +--------------------------------------------------------+
An XR Fragment-compatible browser viewing this scene, lazy-loads and projects painting.png onto the (plane) object called canvas (which is copy-instanced in the bed and livingroom).
-Also, after lazy-loading rescue.com/aquarium.gltf, only the queried objects bassfish and tuna will be instanced inside aquariumcube.
+Also, after lazy-loading ocean.com/aquarium.gltf, only the queried objects bass and tuna will be instanced inside aquariumcube.
Resizing will happen according to its placeholder object (aquariumcube), see chapter Scaling.
Embedding text
+Text in XR has to be unobtrusive, for readers as well as authors.
+We think and speak in simple text, and given the new paradigm of XR interfaces, logically (spoken) text must be enriched afterwards (lazy metadata).
+Therefore, XR Fragment-compliant text will just be plain text, and not yet-another-markuplanguage.
+In contrast to markup languages, this means humans always need to be served first, and machines later.
+Basically, XR interfaces work best when direct feedbackloops between unobtrusive text and humans are guaranteed.
+
In the next chapter you can see how XR Fragments enjoys hasslefree rich text, by supporting visual-meta (data).
+
+Default Data URI mimetype
+The XR Fragment specification bumps the traditional default browser-mimetype
+text/plain;charset=US-ASCII
+to:
+text/plain;charset=utf-8;visual-meta=1
+This means that visual-meta (data) can be appended to plain text without being displayed.
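As a sketch of what this mimetype implies for a renderer (Python used purely for illustration; the marker string is taken from the visual-meta examples in this spec, the function name is hypothetical):

```python
# Sketch: strip the visual-meta appendix from
# text/plain;charset=utf-8;visual-meta=1 content, so only the
# human-readable part is rendered spatially and the appendix is
# kept for lazy (on-interaction) parsing.

VM_MARKER = "@{visual-meta-start}"

def split_visual_meta(text: str):
    """Return (visible_text, visual_meta); visual_meta is '' when absent."""
    head, sep, tail = text.partition(VM_MARKER)
    return head.rstrip(), (sep + tail) if sep else ""

visible, meta = split_visual_meta("welcome human @{visual-meta-start}...")
# visible -> "welcome human"; meta holds the appendix
```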
+
+URL and Data URI
+
+ +--------------------------------------------------------------+ +------------------------+
+ | | | author.com/article.txt |
+ | index.gltf | +------------------------+
+ | │ | | |
+ | ├── ◻ article_canvas | | Hello friends. |
+ | │ └ src: ://author.com/article.txt | | |
+ | │ | | @{visual-meta-start} |
+ | └── ◻ note_canvas | | ... |
+ | └ src:`data:welcome human @{visual-meta-start}...` | +------------------------+
+ | |
+ | |
+ +--------------------------------------------------------------+
+
+The enduser will only see welcome human rendered spatially.
+The beauty is that text (AND visual-meta) in Data URI is saved into the scene, which also promotes rich copy-paste.
+In both cases the text gets rendered immediately (onto a plane geometry, hence the name '_canvas').
+The XR Fragment-compatible browser can let the enduser access visual-meta(data)-fields after interacting with the object (contextmenu e.g.).
+NOTE: this is not to say that XR Browsers should not load HTML/PDF/etc-URLs thru src, it is just that text/plain;charset=utf-8;visual-meta=1 is the default.
+
The mapping between 3D objects and text (src-data) is simple:
+Example:
+
+ +------------------------------------------------------------------------------------+
+ | |
+ | index.gltf |
+ | │ |
+ | ├── ◻ AI |
+ | │ └ class: tech |
+ | │ |
+ | └ src:`data:@{visual-meta-start} |
+ | @{glossary-start} |
+ | @entry{ |
+ | name="AI", |
+ | alt-name1 = "Artificial Intelligence", |
+ | description="Artificial intelligence", |
+ | url = "https://en.wikipedia.org/wiki/Artificial_intelligence", |
+ | } |
+ | @entry{ |
+ | name="tech" |
+ | alt-name1="technology" |
+ | description="when monkeys start to play with things" |
+ | }` |
+ +------------------------------------------------------------------------------------+
+
+Attaching visual-meta as src metadata to the (root) scene-node hints the XR Fragment browser.
+3D object names and classes map to names of visual-meta glossary-entries.
+This allows rich interaction and interlinking between text and 3D objects:
+
+
+- When the user surfs to https://.../index.gltf#AI the XR Fragments-parser points the enduser to the AI object, and can show contextual info about it.
+- When (partial) remote content is embedded thru XR Fragment queries (see XR Fragment queries), its related visual-meta can be embedded along.
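A sketch of how glossary-entries could be extracted from such a visual-meta appendix, so object names/classes like AI and tech can be matched against them (Python for illustration; the field syntax also allows name = {ARC}-style values, which this minimal regex deliberately does not cover):

```python
import re

# Sketch: pull @entry{...} blocks out of a visual-meta glossary and
# index them by their 'name' field. Assumes key="value" style fields
# and no nested braces inside an entry body.

ENTRY_RE = re.compile(r"@entry\{(.*?)\}", re.S)
FIELD_RE = re.compile(r'([\w-]+)\s*=\s*"([^"]*)"')

def parse_glossary(visual_meta: str) -> dict:
    """Map entry name -> dict of its fields."""
    glossary = {}
    for body in ENTRY_RE.findall(visual_meta):
        fields = dict(FIELD_RE.findall(body))
        if "name" in fields:
            glossary[fields["name"]] = fields
    return glossary
```

An XR browser could then look up the object pointed to by e.g. #AI in this glossary to show contextual info.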
+
+
+
+
+
+HYPER copy/paste
+The previous example offers something exciting compared to simple copy/paste of 3D objects or text.
+XR Fragment allows HYPER-copy/paste: time, space and text interlinked.
+Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share data in these ways:
+
+
+- time/space: 3D object (current animation-loop)
+- text: Text object (including visual-meta if any)
+- interlinked: Collected objects by visual-meta tag
+
+
+
+XR Fragment queries
+Include, exclude, hide/show objects using space-separated strings:
+
+
+- #q=cube
+- #q=cube -ball_inside_cube
+- #q=* -sky
+- #q=-.language .english
+- #q=cube&rot=0,90,0
+- #q=price:>2 price:<5
+
+It's a simple but powerful syntax which allows <b>css</b>-like class/id-selectors with a searchengine prompt-style feeling:
+
+
+- queries are only executed when <b>embedded</b> in the asset/scene (thru src). This is to prevent sharing of scene-tampered URL's.
+- search words are matched against 3D object names or metadata-key(values)
+- # equals #q=*
+- words starting with . (.language) indicate class-properties
+
+For example: #q=.foo is a shorthand for #q=class:foo, which will select objects with custom property class:foo. A simple #q=cube will just select an object named cube.
+
+
+- see
an example video here
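A minimal sketch of how such a query could be evaluated against a single object (Python for illustration; the last-match-wins precedence and the classes parameter are assumptions of this sketch, not normative):

```python
# Sketch: decide per object whether a space-separated query
# (e.g. '#q=* -sky') shows it. Supports '*', excluders ('-') and
# the '.' alias for class-selectors, per the rules above.

def match_query(query: str, name: str, classes=()) -> bool:
    shown = False
    for token in query.split():
        exclude = token.startswith("-")
        token = token.lstrip("-")
        if token.startswith("."):            # .foo is an alias for class:foo
            token = "class:" + token[1:]
        if token.startswith("class:"):
            hit = token[6:] in classes
        else:
            hit = token == "*" or token == name
        if hit:                              # later tokens override earlier ones
            shown = not exclude
    return shown

match_query("* -sky", "cube")   # -> True (everything except 'sky')
```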
+
+
+including/excluding
+|''operator'' | ''info'' |
+|* | select all objects (only allowed in src custom property) in the <b>current</b> scene (<b>after</b> the default [[predefined_view|predefined_view]] # was executed)|
+|- | removes/hides object(s) |
+|: | indicates an object-embedded custom property key/value |
+|. | alias for class: (.foo equals class:foo) |
+|> <| compare float or int number|
+|/ | reference to root-scene.
+Useful in case of (preventing) showing/hiding objects in nested scenes (instanced by [[src]])
+#q=-/cube hides object cube only in the root-scene (not nested cube objects)
+ #q=-cube hides both object cube in the root-scene <b>AND</b> nested cube objects |
+» example implementation
+» example 3D asset
+» discussion
+
+
+
+Query Parser
+Here's how to write a query parser:
+
+
+- create an associative array/object to store query-arguments as objects
+- detect object id's & properties foo:1 and foo (reference regex: /^.*:[><=!]?/ )
+- detect excluders like -foo,-foo:1,-.foo,-/foo (reference regex: /^-/ )
+- detect root selectors like /foo (reference regex: /^[-]?\// )
+- detect class selectors like .foo (reference regex: /^[-]?class$/ )
+- detect number values like foo:1 (reference regex: /^[0-9\.]+$/ )
+- expand aliases like .foo into class:foo
+- for every query token split string on :
+- create an empty array rules
+- then strip key-operator: convert "-foo" into "foo"
+- add operator and value to rule-array
+- therefore we set id to true or false (false=excluder -)
+- and we set root to true or false (true=/ root selector is present)
+- we convert key '/foo' into 'foo'
+- finally we add the key/value to the store (store.foo = {id:false,root:true} e.g.)
+
+An example query-parser (which compiles to many languages) can be found here
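The steps above can be sketched as follows (Python purely for illustration; the linked Haxe query-parser is the reference, and keying the store per selector-name is a simplification of it):

```python
import re

# Sketch of the query-parser steps above: per token we record
# id (false = excluder '-'), root (true = '/' root-selector),
# and, for property selectors, a comparison operator and value.

def parse_query(query: str) -> dict:
    store = {}
    for token in query.split():
        rule = {}
        rule["id"] = not token.startswith("-")   # false = excluder '-' present
        token = token.lstrip("-")
        rule["root"] = token.startswith("/")     # '/' targets the root-scene
        token = token.lstrip("/")                # convert '/foo' into 'foo'
        if token.startswith("."):                # expand alias .foo -> class:foo
            token = "class:" + token[1:]
        if ":" in token:                         # property selector key:value
            key, value = token.split(":", 1)
            m = re.match(r"^([><=!]?)(.*)$", value)
            rule["operator"] = m.group(1)
            v = m.group(2)
            rule["value"] = float(v) if re.match(r"^[0-9.]+$", v) else v
        else:
            key = token
        store[key] = rule                        # e.g. store['foo'] = {id:..., root:...}
    return store
```

Note that compound ranges like price:>2 price:<5 would need a list of rules per key; this sketch keeps only the last one.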
+
List of XR URI Fragments
@@ -110,6 +254,6 @@ Resizing will be happen accordingly to its placeholder object (aquariumcube<
TODO acknowledge.
-
+
diff --git a/doc/generate.sh b/doc/generate.sh
index 565c57c..5d1cf09 100755
--- a/doc/generate.sh
+++ b/doc/generate.sh
@@ -3,5 +3,5 @@ set -e
mmark RFC_XR_Fragments.md > RFC_XR_Fragments.xml
xml2rfc --v3 RFC_XR_Fragments.xml # RFC_XR_Fragments.txt
-mmark --html RFC.template.md | grep -vE '()' > RFC_XR_Fragments.html
+mmark --html RFC_XR_Fragments.md | grep -vE '()' > RFC_XR_Fragments.html
#sed 's|visual-meta|visual-meta|g' -i RFC_XR_Fragments.html