Visual Paths

We define a “visual path” as the video sequence captured by a moving person while executing a journey along a particular physical path.

Diagram of Visual Paths contributed by users

A sample path (Corridor 1, C1) illustrating multiple passes through the same space. Each pass is a sequence that is either stored in a database or submitted as a query against previous journeys. In the assistive context, the user at point A could be blind or partially sighted, and would benefit from solutions to the problem of associating a query journey with previous “journey experiences” along roughly the same path, crowdsourced from N users who may be sighted.
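The association problem above can be illustrated with a minimal sketch: given a descriptor for a query frame, find the closest-matching frame among those collected on earlier passes and report that frame's ground-truth position. This is a hypothetical nearest-neighbour illustration, not the authors' actual pipeline, and descriptor extraction is abstracted away.

```python
import math

def localise(query_desc, database):
    """Return the position of the database frame whose descriptor
    is nearest (Euclidean distance) to the query descriptor.

    database: list of (descriptor, position_in_metres) pairs
    gathered from previously crowdsourced passes.
    """
    best_desc, best_pos = min(
        database, key=lambda entry: math.dist(entry[0], query_desc)
    )
    return best_pos

# Toy database: four frames with 3-D descriptors and positions along the path.
db = [
    ([0.0, 0.0, 1.0], 0.0),
    ([0.0, 1.0, 0.0], 2.5),
    ([1.0, 0.0, 0.0], 5.0),
    ([0.5, 0.5, 0.0], 7.5),
]
print(localise([0.9, 0.1, 0.0], db))  # nearest descriptor is the third frame
```

In practice, one query journey would be matched frame-by-frame against many stored passes, and the per-frame estimates smoothed along the path.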

RSM Dataset details
  • 60 videos
  • 6 corridors
  • 3.05 km
  • 10 total passes/corridor
    • 5 passes/corridor with Nexus 4 @ 1920 x 1080 and 1280 x 720, 24-30 fps
    • 5 passes/corridor with Google Glass @ 1280 x 720, 30 fps
  • 90,302 frames with positional ground truth
Table 1 summarizes the acquisition. As can be seen, the length of the sequences varies within some corridors because of differences in walking speed and frame rate. Lighting also varied, owing to a mix of daylight and night-time acquisitions and to prominent windows that act as strong light sources in parts of some corridors. Some videos also differ from one pass to the next because of activities such as cleaning or the shifting of furniture, and the occasional appearance of people.
A summary of the dataset with thumbnails

Table 1. A summary of the dataset with thumbnails. Also available as an online spreadsheet.
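For concreteness, the composition described above can be enumerated in a short sketch. The corridor and device identifiers here are illustrative assumptions, not the actual archive layout.

```python
# Illustrative enumeration of the RSM dataset composition: 6 corridors,
# each traversed 5 times with a Nexus 4 and 5 times with Google Glass.
# Names like "C1" and "nexus4" are assumptions, not the archive's own.
corridors = [f"C{i}" for i in range(1, 7)]      # 6 corridors
passes_per_device = {"nexus4": 5, "glass": 5}   # 10 passes per corridor

videos = [
    (corridor, device, n)
    for corridor in corridors
    for device, count in passes_per_device.items()
    for n in range(1, count + 1)
]
print(len(videos))  # 6 corridors x 10 passes = 60 videos
```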

Synthetic RSM Dataset details

Synthetic corridor generated using the Unity engine.

  • 7 passes
Browse and Download

RSM Dataset: Download

Synthetic RSM Dataset: Download


The code for localisation from visual paths is available on Bitbucket.

If you use our code, please cite our respective publications (see below).

  • J. Rivera-Rubio, I. Alexiou, and A. A. Bharath, “Appearance-based indoor localization: a comparison of patch descriptor performance,” Pattern Recognition Letters, vol. 66, pp. 109-117, 2015. doi:10.1016/j.patrec.2015.03.003
    [BibTeX] [Download PDF]
    @article{RiveraRubio2015,
      title    = "Appearance-based indoor localization: A comparison of patch descriptor performance",
      journal  = "Pattern Recognition Letters",
      volume   = "66",
      pages    = "109--117",
      year     = "2015",
      note     = "Pattern Recognition in Human Computer Interaction",
      issn     = "0167-8655",
      doi      = "10.1016/j.patrec.2015.03.003",
      author   = "Jose Rivera-Rubio and Ioannis Alexiou and Anil A. Bharath",
      keywords = "Visual localization, Descriptors, Human–computer interaction, Assistive devices"
    }

  • J. Rivera-Rubio, I. Alexiou, L. Dickens, R. Secoli, E. Lupu, and A. A. Bharath, “Associating locations from wearable cameras,” in Proceedings of the British Machine Vision Conference, 2014.
    [BibTeX] [Download PDF]
    @inproceedings{RiveraRubio2014,
      title     = {Associating locations from wearable cameras},
      author    = {Rivera-Rubio, Jose and Alexiou, Ioannis and Dickens, Luke and Secoli, Riccardo and Lupu, Emil and Bharath, Anil A},
      year      = {2014},
      booktitle = {Proceedings of the British Machine Vision Conference},
      publisher = {BMVA Press},
      editor    = {Valstar, Michel and French, Andrew and Pridmore, Tony},
      keywords  = {CV}
    }


The dataset hosting is supported by the EPSRC V&L Net Pump-Priming Grant 2013-1 awarded to Dr Riccardo Secoli, Jose Rivera-Rubio and Anil A. Bharath.