diff --git a/doc/RFC_XR_Fragments.md b/doc/RFC_XR_Fragments.md
index 0e00a84..e6b2e02 100644
--- a/doc/RFC_XR_Fragments.md
+++ b/doc/RFC_XR_Fragments.md
@@ -103,39 +103,41 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
How can we add more features to existing text & 3D scenes, without introducing new dataformats?
Historically, there's many attempts to create the ultimate markuplanguage or 3D fileformat.
-However, thru the lens of authoring their lowest common denominator is still: plain text.
-XR Fragments allows us to enrich existing dataformats, by recursive use of existing technologies:
+However, thru the lens of authoring, their lowest common denominator is still: plain text.
+XR Fragments allows us to enrich/connect existing dataformats, by recursive use of existing technologies:
1. addressibility and navigation of 3D scenes/objects: [URI Fragments](https://en.wikipedia.org/wiki/URI_fragment) + src/href spatial metadata
-1. hasslefree tagging across text and spatial objects using BiBTeX ([visual-meta](https://visual-meta.info) e.g.)
+1. hasslefree tagging across text and spatial objects using [BiBTeX](https://en.wikipedia.org/wiki/BibTeX) ([visual-meta](https://visual-meta.info) e.g.)
> NOTE: The chapters in this document are ordered from highlevel to lowlevel (technical) as much as possible
-# Conventions and Definitions
-
-|definition | explanation |
-|----------------------|---------------------------------------------------------------------------------------------------------------------------|
-|human | a sentient being who thinks fuzzy, absorbs, and shares thought (by plain text, not markuplanguage) |
-|scene | a (local/remote) 3D scene or 3D file (index.gltf e.g.) |
-|3D object | an object inside a scene characterized by vertex-, face- and customproperty data. |
-|metadata | custom properties of text, 3D Scene or Object(nodes), relevant to machines and a human minority (academics/developers) |
-|XR fragment | URI Fragment with spatial hints (`#pos=0,0,0&t=1,100` e.g.) |
-|src | (HTML-piggybacked) metadata of a 3D object which instances content |
-|href | (HTML-piggybacked) metadata of a 3D object which links to content |
-|query | an URI Fragment-operator which queries object(s) from a scene (`#q=cube`) |
-|visual-meta | [visual-meta](https://visual.meta.info) data appended to text which is indirectly visible/editable in XR. |
-|requestless metadata | opposite of networked metadata (RDF/HTML request-fanouts easily cause framerate-dropping, hence not used a lot in games). |
-|FPS | frames per second in spatial experiences (games,VR,AR e.g.), should be as high as possible |
-|introspective | inward sensemaking ("I feel this belongs to that") |
-|extrospective | outward sensemaking ("I'm fairly sure John is a person who lives in oklahoma") |
-|`◻` | ascii representation of an 3D object/mesh |
-
# Core principle
-XR Fragments strives to serve humans first, machine(implementations) later, by ensuring hasslefree text-to-thought feedback loops.
+XR Fragments strives to serve humans first, machine(implementations) later, by ensuring hasslefree text-vs-thought feedback loops.
This also means that the repair-ability of machine-matters should be human friendly too (not too complex).
-> "When a car breaks down, the ones without turbosupercharger are easier to fix"
+> "When a car breaks down, the ones **without** turbosupercharger are easier to fix"
+
+Let's always focus on average humans: the 'fuzzy symbolical mind' must be served first, before serving the greater ['categorized typesafe RDF hive mind'](https://en.wikipedia.org/wiki/Borg).
+
+# Conventions and Definitions
+
+|definition | explanation |
+|----------------------|-------------------------------------------------------------------------------------------------------------------------------|
+|human | a sentient being who thinks fuzzy, absorbs, and shares thought (by plain text, not markuplanguage) |
+|scene | a (local/remote) 3D scene or 3D file (index.gltf e.g.) |
+|3D object             | an object inside a scene characterized by vertex-, face-, and custom-property data.                                            |
+|metadata | custom properties of text, 3D Scene or Object(nodes), relevant to machines and a human minority (academics/developers) |
+|XR fragment | URI Fragment with spatial hints (`#pos=0,0,0&t=1,100` e.g.) |
+|src | (HTML-piggybacked) metadata of a 3D object which instances content |
+|href | (HTML-piggybacked) metadata of a 3D object which links to content |
+|query                 | a URI Fragment operator which queries object(s) from a scene (`#q=cube`)                                                        |
+|visual-meta           | [visual-meta](https://visual-meta.info) data appended to text which is indirectly visible/editable in XR.                       |
+|requestless metadata  | opposite of networked metadata (RDF/HTML request fan-outs easily cause framerate drops, hence rarely used in games).            |
+|FPS | frames per second in spatial experiences (games,VR,AR e.g.), should be as high as possible |
+|introspective | inward sensemaking ("I feel this belongs to that") |
+|extrospective         | outward sensemaking ("I'm fairly sure John is a person who lives in Oklahoma")                                                  |
+|`◻`                   | ASCII representation of a 3D object/mesh                                                                                        |
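The fragment notations in the table above (`#pos=0,0,0&t=1,100`, `#q=cube`) follow ordinary query-string syntax, so they can be parsed with stock URL tooling. A minimal sketch in Python; `parse_xr_fragment` is a hypothetical helper name, not something this RFC prescribes:

```python
from urllib.parse import parse_qs

def parse_xr_fragment(uri: str) -> dict:
    """Parse the XR Fragment part of a URI into a dict.

    Hypothetical helper (not part of the spec): treats the fragment
    as query-style `key=value` pairs and splits comma-separated
    values (positions, time ranges) into lists.
    """
    _, _, frag = uri.partition("#")
    pairs = parse_qs(frag)  # e.g. {'pos': ['0,0,0'], 'q': ['cube']}
    return {
        key: values[0].split(",") if "," in values[0] else values[0]
        for key, values in pairs.items()
    }

print(parse_xr_fragment("index.gltf#pos=0,0,0&t=1,100&q=cube"))
# → {'pos': ['0', '0', '0'], 't': ['1', '100'], 'q': 'cube'}
```

A URI without a fragment simply yields an empty dict, which keeps plain scene links valid.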
# List of URI Fragments
@@ -216,8 +218,8 @@ Resizing will be happen accordingly to its placeholder object (`aquariumcube`),
# Text in XR (tagging,linking to spatial objects)
We still think and speak in simple text, not in HTML or RDF.
-It would be funny when people would shout `