
IDEA: Add a strictValidation flag that strictly validates all inputs on the client library, to catch errors early? Default enabled, allow disabling for e.g. people working on protocol extensions

IDEA: Bot(?) library with E2EE support

Packages:

  • create-session (for each auth method! produce a {homeserver, accessToken, protocolVersions, unstableFeatures} object named a 'session')
  • sync-to-events (parse sync response, turn into list of events)
  • event-stream (returns a stream of events that keeps polling /sync)
  • various method packages for individual operations, that take a 'session' as their argument
  • client, which combines all(?) of the above into a single API... or maybe just the method packages and event-stream? and let a client be initialized with a 'session' object from elsewhere
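
A rough sketch of how these packages could fit together; every name here (createSession, buildRequestUrl) is an illustrative assumption, not a settled API:

```javascript
// Hypothetical shape of a 'session' object, and a method package that
// takes a session as its first argument. All names are assumptions.
function createSession({ homeserver, accessToken }) {
	return {
		homeserver,
		accessToken,
		protocolVersions: [], // would be filled in from /versions in a real flow
		unstableFeatures: {}
	};
}

// A 'method package': a plain function over a session, not a class method.
function buildRequestUrl(session, path) {
	return `${session.homeserver}/_matrix/client/r0${path}`;
}

const session = createSession({
	homeserver: "https://example.com",
	accessToken: "secret"
});
console.log(buildRequestUrl(session, "/sync"));
// → https://example.com/_matrix/client/r0/sync
```

This keeps the method packages stateless; a `client` wrapper would only need to close over a session and re-export them.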

TODO: Write a package for generating a filename based on description + mimetype?
  - If description is already a filename with a valid extension for the mimetype (or no mimetype), return as-is
  - Otherwise, if mimetype is set, return description + primary extension for the mimetype
  - If neither a valid filename nor a mimetype is set, return undefined
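
The three rules above could look something like this; the function name and the (tiny) extension table are assumptions for illustration:

```javascript
// Hypothetical filename generator; extensionsByMimetype is a placeholder
// sample, a real package would use a full mimetype database.
const extensionsByMimetype = {
	"image/png": ["png"],
	"image/jpeg": ["jpg", "jpeg"],
	"text/plain": ["txt"]
};

function makeFilename(description, mimetype) {
	const validExtensions = mimetype ? (extensionsByMimetype[mimetype] ?? []) : null;
	const match = /\.([a-z0-9]+)$/i.exec(description ?? "");

	// Rule 1: description already carries a valid extension for the
	// mimetype (or no mimetype was given at all): return as-is
	if (match && (validExtensions === null || validExtensions.includes(match[1].toLowerCase()))) {
		return description;
	}

	// Rule 2: mimetype is known: append its primary extension
	if (validExtensions != null && validExtensions.length > 0) {
		return `${description}.${validExtensions[0]}`;
	}

	// Rule 3: neither a valid filename nor a usable mimetype
	return undefined;
}
```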

How to deal with event validation: it's probably easiest to have a two-pass validator, where the first pass does any normalization and critical validation, and the second pass does "fail-able" validation - i.e. validators that can safely fail and should just produce a warning.
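
A minimal sketch of that two-pass shape (the specific checks and names are assumptions): pass 1 throws on critical problems, pass 2 only collects warnings.

```javascript
// Two-pass event validator sketch. Pass 1: normalize + critical checks
// (throws). Pass 2: fail-able checks (warnings only).
function validateEvent(rawEvent) {
	const warnings = [];

	// Pass 1: critical; a missing type makes the event unusable.
	if (typeof rawEvent.type !== "string") {
		throw new Error("Event is missing a 'type'");
	}
	// Normalization: ensure content always exists as an object.
	const event = { ...rawEvent, content: rawEvent.content ?? {} };

	// Pass 2: fail-able; a bad timestamp is worth a warning, not a failure.
	if (event.origin_server_ts != null && typeof event.origin_server_ts !== "number") {
		warnings.push("origin_server_ts is not a number");
	}

	return { event, warnings };
}
```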

TODO: Figure out a failsafe mechanism to prevent unencrypted file uploads in service of an encrypted m.file/m.video/etc. event. Maybe default-reject unencrypted files in the makeEncryptedMessage wrapper, unless a permitUnencryptedAttachments flag is set (e.g. for forwarding attachments from an unencrypted room)?

FIXME: Split makeMessage stuff out of the send* modules once we need to encrypt events for E2EE

TODO: Find a way to reduce validation schema duplication for this

FIXME: Gracefully handle 500 M_UNKNOWN: Internal server error (use exponential backoff?)
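
For the exponential-backoff idea, a generic retry wrapper could look like this (the retry count, delays, and name are assumptions; a real version should probably only retry on 5xx responses):

```javascript
// Retry an async operation with exponential backoff: 1s, 2s, 4s, ...
// between attempts, rethrowing once the retry budget is exhausted.
async function withBackoff(operation, { retries = 5, baseDelay = 1000 } = {}) {
	for (let attempt = 0; ; attempt += 1) {
		try {
			return await operation();
		} catch (error) {
			if (attempt >= retries) throw error;
			await new Promise((resolve) => setTimeout(resolve, baseDelay * 2 ** attempt));
		}
	}
}
```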

Reading magic bytes from a blob: https://gist.github.com/topalex/ad13f76150e0b36de3c4a3d5ba8dc63a
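
The gist above boils down to slicing the first few bytes; a minimal version, assuming an environment with the Blob API (browsers, Node 18+):

```javascript
// Read the first `length` bytes of a Blob, e.g. to sniff a file type
// from its magic bytes (0x89 'P' 'N' 'G' for PNG, etc.).
async function readMagicBytes(blob, length = 4) {
	const buffer = await blob.slice(0, length).arrayBuffer();
	return Array.from(new Uint8Array(buffer));
}
```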


PaginatedResource (cursor) interface

items: the items in the current chunk
getNext({ [limit] }): retrieves the next chunk of items, in the 'direction of travel'
getPrevious({ [limit] }): retrieves the previous chunk of items, in the 'direction of travel'

getNext and getPrevious may reject with a NoMoreItems error if the end of pagination in that direction has been reached; either immediately, or after making another speculative request, depending on the underlying technical requirements

not every implementation may support the limit override; in those cases, there is either no fixed limit, or the limit can only be set during the initial request
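
An in-memory stand-in for this interface (a real implementation would fetch chunks over the network; `makePaginatedResource` and the internals are assumptions):

```javascript
// Thrown when pagination has reached the end in the requested direction.
class NoMoreItems extends Error {}

// In-memory PaginatedResource sketch: `allItems` stands in for the
// remote resource, `chunkSize` is the default limit.
function makePaginatedResource(allItems, chunkSize = 2) {
	let start = 0;
	const resource = {
		items: allItems.slice(0, chunkSize),
		async getNext({ limit = chunkSize } = {}) {
			if (start + resource.items.length >= allItems.length) throw new NoMoreItems();
			start += resource.items.length;
			resource.items = allItems.slice(start, start + limit);
			return resource.items;
		},
		async getPrevious({ limit = chunkSize } = {}) {
			if (start === 0) throw new NoMoreItems();
			const newStart = Math.max(0, start - limit);
			resource.items = allItems.slice(newStart, start);
			start = newStart;
			return resource.items;
		}
	};
	return resource;
}
```

Here NoMoreItems is thrown immediately; a network-backed implementation might only discover the end after a speculative request, as noted above.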


messages endpoint

Forwards:
  • Request initial set somehow
  • Request again, dir f, from
  • Response: start:


parjs combinators

Characters

digit: ASCII(?) digit in base 10 (decimal)
hex: ASCII(?) digit in base 16 (hex)
uniDecimal: unicode digit in base 10 (decimal)

letter: ASCII letter
uniLetter: unicode letter

lower: ASCII lower-case letter
uniLower: unicode lower-case letter

upper: ASCII upper-case letter
uniUpper: unicode upper-case letter

space: single ASCII space
spaces1: one or more ASCII spaces (i.e. space+)
whitespace: zero or more ASCII "whitespace characters", whatever that means

anyChar: a single character of any kind
anyCharOf: a single character in the specified <set: string>
noCharOf: a single character NOT in the specified <set: string>

Control flow

replaceState: apply the specified parser, but within an isolated scope (which may also be specified as a function(parentState) -> scopedState)
backtrack: apply the specified parser and return the result, but do not advance the parser position
later: placeholder parser that does nothing, and on which the .init method needs to be called later, to (mutably!) fill in the actual logic

between: apply the <before: parser>, and then the specified parser, and then the <after: parser>, and return the result of the middle one
thenq: apply the specified parser and then the <next: parser>, and return the result of the former
qthen: apply the specified parser and then the <next: parser>, and return the result of the latter
then: apply the specified parser and then the <next: parser>, and return an array containing the results of the [former, latter]
thenPick: apply the specified parser and then call the <selector: function(result, userState)>, which dynamically returns the next parser to apply

Boolean operations

not: invert the result of the specified parser
or: attempt each of the specified <parsers ...> until one succeeds or all fail

Repeats

exactly: apply the specified parser <count: number> times, and return an array of results
many: apply the specified parser until it runs out of matches, and return an array of results
manyBetween: apply the <start: parser>, then the specified parser, until the <end: parser> is encountered, then call the <projection: function(results[], end, userState)>
manySepBy: apply the specified parser until it runs out of matches, and expect each match to be separated with the <delimiter: parser>; return an array of results
manyTill: apply the specified parser until the <end: parser> is encountered, and return the results of the specified parser

Result and state handling

must: apply the specified parser, and expect its result to pass the <predicate: function(result)>
mustCapture: apply the specified parser, and expect the parser position to have advanced

each: apply the specified parser, then call the <projection: function(result, userState)>, and return the original result (userState may be mutated)
map: apply the specified parser, then call the <projection: function(result, userState)>, and return the result of the projection function
mapConst: apply the specified parser, then throw away the result and return the specified <value> instead
maybe: apply the specified parser, and return its result if it succeeds, or otherwise the specified <value> (i.e. a default value)

flatten: like map((result) => result.flat())
stringify: like map((result) => String(result)), but with extra stringification logic

Error handling

recover: apply the specified parser, and if it hard-fails, call the <recovery: function(failureState)> and, if a new parserState is returned from it, return that instead of the original
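
The then/map/many semantics above can be illustrated with toy combinators; note this is NOT the actual parjs API, just plain functions mirroring the behaviour, where a parser is function(input, pos) -> { ok, value, pos }:

```javascript
// Toy parser combinators mirroring the semantics described above.
// char: matches a single specific character.
const char = (c) => (input, pos) =>
	input[pos] === c ? { ok: true, value: c, pos: pos + 1 } : { ok: false, pos };

// then: apply p, then q, and return [pResult, qResult].
const then = (p, q) => (input, pos) => {
	const r1 = p(input, pos);
	if (!r1.ok) return r1;
	const r2 = q(input, r1.pos);
	if (!r2.ok) return r2;
	return { ok: true, value: [r1.value, r2.value], pos: r2.pos };
};

// map: apply p, then pass its result through a projection.
const map = (p, projection) => (input, pos) => {
	const r = p(input, pos);
	return r.ok ? { ...r, value: projection(r.value) } : r;
};

// many: apply p until it stops matching, collecting results.
const many = (p) => (input, pos) => {
	const values = [];
	let r;
	while ((r = p(input, pos)).ok) {
		values.push(r.value);
		pos = r.pos;
	}
	return { ok: true, value: values, pos };
};
```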


Message history access patterns:

  • Window seek (fetch N messages before/after marker/ID)
  • Insert N messages before/after marker/ID
  • Update (message with given ID)
  • Purge (messages that have not been accessed for N time / that have been least recently accessed)

Each 'message' should have an internal list of all applicable events which modify it, e.g. edits and reactions

Maybe have the data structure expose an async 'seek' API which will transparently fetch messages if not locally available?
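
That transparent seek could be sketched like this; `fetchMessages` is an injected placeholder for the network layer, and all names are assumptions:

```javascript
// History store with a transparent async 'seek': returns the message
// from the local cache, fetching it first if not locally available.
function makeHistory(fetchMessages) {
	const cache = new Map(); // messageId -> message

	return {
		async seek(messageId) {
			if (!cache.has(messageId)) {
				// Not locally available: fetch and cache the returned range
				for (const message of await fetchMessages(messageId)) {
					cache.set(message.id, message);
				}
			}
			return cache.get(messageId);
		}
	};
}
```

A fuller version would also implement the purge pattern above by tracking last-access timestamps in the cache.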


/sync response

Sync response:

{
	// Required. The batch token to supply in the since param of the next /sync request.
	next_batch: "token",
	// Updates to rooms.
	rooms: { // Rooms
		// The rooms that the user has joined.
		join:  {
			"!room_id:example.com": { // Joined Room
				// Information about the room which clients may need to correctly render it to users.
				summary: { // RoomSummary
					// The users which can be used to generate a room name if the room does not have one. Required if the room's m.room.name or m.room.canonical_alias state events are unset or empty.
					"m.heroes": ["@foo:example.com", "@bar:example.com"],
					// The number of users with membership of join, including the client's own user ID. If this field has not changed since the last sync, it may be omitted. Required otherwise.
					"m.joined_member_count": 10,
					// The number of users with membership of invite. If this field has not changed since the last sync, it may be omitted. Required otherwise.
					"m.invited_member_count": 10
				},
				// Updates to the state, between the time indicated by the since parameter, and the start of the timeline (or all state up to the start of the timeline, if since is not given, or full_state is true).
				state: { // State
					// List of events.
					events: [ stateEvent, stateEvent, stateEvent ]
				},
				// The timeline of messages and state changes in the room.
				timeline: { // Timeline
					// List of events.
					events: [ roomEvent, roomEvent, roomEvent ],
					// True if the number of events returned was limited by the limit on the filter.
					limited: true,
					// A token that can be supplied to the from parameter of the rooms/{roomId}/messages endpoint.
					prev_batch: "string"
				},
				// The ephemeral events in the room that aren't recorded in the timeline or state of the room. e.g. typing.
				ephemeral: { // Ephemeral
					// List of events.
					events: [ event, event, event ]
				},
				// The private data that this user has attached to this room.
				account_data: { // Account Data
					// List of events.
					events: [ event, event, event ]
				},
				// Counts of unread notifications for this room. Servers MUST include the number of unread notifications in a client's /sync stream, and MUST update it as it changes. Notifications are determined by the push rules which apply to an event. When the user updates their read receipt (either by using the API or by sending an event), notifications prior to and including that event MUST be marked as read.
				unread_notifications: { // Unread Notification Counts
					// The number of unread notifications for this room with the highlight flag set
					highlight_count: 10,
					// The total number of unread notifications for this room
					notification_count: 10
				},
			}
		},
		// The rooms that the user has been invited to.
		invite: {
			"!room_id:example.com": { // Invited Room
				// The state of a room that the user has been invited to. These state events may only have the sender, type, state_key and content keys present. These events do not replace any state that the client already has for the room, for example if the client has archived the room. Instead the client should keep two separate copies of the state: the one from the invite_state and one from the archived state. If the client joins the room then the current state will be given as a delta against the archived state not the invite_state.
				invite_state: { // InviteState
					// The StrippedState events that form the invite state.
					events: [ strippedEvent, strippedEvent, strippedEvent ]
				}
			}
		},
		// The rooms that the user has left or been banned from.
		leave: {
			"!room_id:example.com": { // Left Room
				// The state updates for the room up to the start of the timeline.
				state: { // State
					// List of events.
					events: [ stateEvent, stateEvent, stateEvent ]
				},
				// The timeline of messages and state changes in the room up to the point when the user left.
				timeline: { // Timeline
					// List of events.
					events: [ roomEvent, roomEvent, roomEvent ],
					// True if the number of events returned was limited by the limit on the filter.
					limited: true,
					// A token that can be supplied to the from parameter of the rooms/{roomId}/messages endpoint.
					prev_batch: "string"
				},
				// The private data that this user has attached to this room.
				account_data: { // Account Data
					// List of events.
					events: [ event, event, event ]
				},
			}
		}
	},
	// The updates to the presence status of other users.
	presence: { // Presence
		// List of events.
		events: [ event, event, event ]
	},
	// The global private data created by this user.
	account_data: { // Account Data
		// List of events.
		events: [ event, event, event ]
	},
	// Optional. Information on the send-to-device messages for the client device.
	to_device: { // ToDevice
		// List of send-to-device messages.
		events: [ toDeviceEvent, toDeviceEvent, toDeviceEvent ]
	},
	// Optional. Information on e2e device updates. Note: only present on an incremental sync.
	device_lists: { // DeviceLists
		// List of users who have updated their device identity keys, or who now share an encrypted room with the client since the previous sync response.
		changed: [ "string", "string", "string" ],
		// List of users with whom we do not share any encrypted rooms anymore since the previous sync response.
		left: [ "string", "string", "string" ]
	},
	// Optional. For each key algorithm, the number of unclaimed one-time keys currently held on the server for this device.
	device_one_time_keys_count: {
		[algorithmName]: 10,
		[algorithmName]: 10
	}
}
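
A minimal sketch of the sync-to-events package idea, flattening the response shape above into one event list; the output shape ({ roomId, membership, section, event }) is an assumption:

```javascript
// Flatten a /sync response into a single list of events, tagging each
// with its room ID, membership section (join/invite/leave), and the
// part of the room object it came from.
function syncToEvents(syncResponse) {
	const events = [];
	const rooms = syncResponse.rooms ?? {};
	for (const [membership, roomMap] of Object.entries(rooms)) {
		for (const [roomId, room] of Object.entries(roomMap)) {
			for (const section of ["state", "timeline", "ephemeral", "account_data", "invite_state"]) {
				for (const event of room[section]?.events ?? []) {
					events.push({ roomId, membership, section, event });
				}
			}
		}
	}
	return events;
}
```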

Event:

{
	// Required. The fields in this object will vary depending on the type of event. When interacting with the REST API, this is the HTTP body.
	content: varyingObject,
	// Required. The type of event. This SHOULD be namespaced similar to Java package naming conventions e.g. 'com.example.subdomain.event.type'
	type: "string",
}

Room event:

{
	... Event, // inherits
	// Required. The globally unique event identifier.
	event_id: "string",
	// Required. The type of event. This SHOULD be namespaced similar to Java package naming conventions e.g. 'com.example.subdomain.event.type'
	type: "string",
	// Required. Contains the fully-qualified ID of the user who sent this event.
	sender: "string",
	// Required. The fields in this object will vary depending on the type of event. When interacting with the REST API, this is the HTTP body.
	content: varyingObject,
	// Required. Timestamp in milliseconds on originating homeserver when this event was sent.
	origin_server_ts: 10,
	// Required. The ID of the room associated with this event. Will not be present on events that arrive through /sync, despite being required everywhere else.
	room_id: "string",
	// Contains optional extra information about the event.
	unsigned: { // UnsignedData
		// The time in milliseconds that has elapsed since the event was sent. This field is generated by the local homeserver, and may be incorrect if the local time on at least one of the two servers is out of sync, which can cause the age to either be negative or greater than it actually is.
		age: 10,
		// Optional. The event that redacted this event, if any.
		redacted_because: event,
		// The client-supplied transaction ID, if the client being given the event is the same one which sent it.
		transaction_id: "string"
	}
}

State event:

{
	... RoomEvent, // inherits
	// Required. A unique key which defines the overwriting semantics for this piece of room state. This value is often a zero-length string. The presence of this key makes this event a State Event. State keys starting with an @ are reserved for referencing user IDs, such as room members. With the exception of a few events, state events set with a given user's ID as the state key MUST only be set by that user.
	state_key: "string",
	// Optional. The previous content for this event. If there is no previous content, this key will be missing.
	prev_content: varyingObject,
}

Stripped event:

{
	// Required. The type for the event.
	type: "string",
	// Required. The content for the event.
	content: varyingObject,
	// Required. The state_key for the event.
	state_key: "string",
	// Required. The sender for the event.
	sender: "string"
}

To-device event:

{
	// The content of this event. The fields in this object will vary depending on the type of event.
	content: varyingObject,
	// The Matrix user ID of the user who sent this event.
	sender: "string",
	// The type of event.
	type: "string"
}

Filter creation request:

{
	// List of event fields to include. If this list is absent then all fields are included. The entries may include '.' characters to indicate sub-fields. So ['content.body'] will include the 'body' field of the 'content' object. A literal '.' character in a field name may be escaped using a '\'. A server may include more fields than were requested.
	event_fields: [ "field_path", "field_path", "field_path" ],
	// The format to use for events. 'client' will return the events in a format suitable for clients. 'federation' will return the raw event as received over federation. The default is 'client'.
	event_format: ( "client" || "federation" ),
	// The presence updates to include.
	presence: EventFilter,
	// The user account data that isn't associated with rooms to include.
	account_data: EventFilter,
	// Filters to be applied to room data.
	room: {
		// A list of room IDs to exclude. If this list is absent then no rooms are excluded. A matching room will be excluded even if it is listed in the 'rooms' filter. This filter is applied before the filters in ephemeral, state, timeline or account_data
		not_rooms: [ "room_id", "room_id", "room_id" ],
		// A list of room IDs to include. If this list is absent then all rooms are included. This filter is applied before the filters in ephemeral, state, timeline or account_data
		rooms: [ "room_id", "room_id", "room_id" ],
		// Include rooms that the user has left in the sync, default false
		include_leave: false,
		// The state events to include for rooms.
		state: StateFilter,
		// The events that aren't recorded in the room history, e.g. typing and receipts, to include for rooms.
		ephemeral: StateFilter,
		// The message and state update events to include for rooms.
		timeline: StateFilter,
		// The per user account data to include for rooms.
		account_data: StateFilter
	}
}

EventFilter:

{
	// The maximum number of events to return.
	limit: 10,
	// A list of event types to include. If this list is absent then all event types are included. A '*' can be used as a wildcard to match any sequence of characters.
	types: [ "m.type", "m.type", "m.type" ],
	// A list of event types to exclude. If this list is absent then no event types are excluded. A matching type will be excluded even if it is listed in the 'types' filter. A '*' can be used as a wildcard to match any sequence of characters.
	not_types: [ "m.type", "m.type", "m.type" ],
	// A list of senders IDs to include. If this list is absent then all senders are included.
	senders: [ "user_id", "user_id", "user_id" ],
	// A list of sender IDs to exclude. If this list is absent then no senders are excluded. A matching sender will be excluded even if it is listed in the 'senders' filter.
	not_senders: [ "user_id", "user_id", "user_id" ],
}

StateFilter:

{
	... EventFilter, // inherits
	// A list of room IDs to include. If this list is absent then all rooms are included.
	rooms: [ "room_id", "room_id", "room_id" ],
	// A list of room IDs to exclude. If this list is absent then no rooms are excluded. A matching room will be excluded even if it is listed in the 'rooms' filter.
	not_rooms: [ "room_id", "room_id", "room_id" ],
	// If true, enables lazy-loading of membership events. Defaults to false.
	lazy_load_members: false,
	// If true, sends all membership events for all events, even if they have already been sent to the client. Does not apply unless lazy_load_members is true. Defaults to false.
	include_redundant_members: false,
	// If true, includes only events with a url key in their content. If false, excludes those events. If omitted, url key is not considered for filtering.
	contains_url: ( true || false || null )
}

Junk

// Optional. This key will only be present for state events. A unique key which defines the overwriting semantics for this piece of room state.
state_key: "string",
// The MXID of the user who sent this event.
sender: "string",
// The content of this event. The fields in this object will vary depending on the type of event.
content: varyingObject,
// Timestamp in milliseconds on originating homeserver when this event was sent.
origin_server_ts: 10,
// Information about this event which was not sent by the originating homeserver
unsigned: { // Unsigned
	// Time in milliseconds since the event was sent.
	age: 10,
	// Optional. The event that redacted this event, if any.
	redacted_because: event,
	// Optional. The transaction ID set when this message was sent. This key will only be present for message events sent by the device calling this API.
	transaction_id: "string",
	// Optional. The previous content for this state. This will be present only for state events appearing in the timeline. If this is not a state event, or there is no previous content, this key will be missing.
	prev_content: varyingObject,
}


Filter:

{
    "room": {
        "state": {
            "lazy_load_members": true
        }
    }
}