LAJOIE et al.: DOOR-SLAM: DISTRIBUTED, ONLINE, AND OUTLIER RESILIENT SLAM FOR ROBOTIC TEAMS 1663
[3] S. Choudhary, L. Carlone, C. Nieto, J. Rogers, H. Christensen, and
F. Dellaert, “Distributed mapping with privacy and communication con-
straints: Lightweight algorithms and object-based models,” Int. J. Robot.
Res., vol. 36, no. 12, pp. 1286–1311, 2017.
[4] N. Sünderhauf and P. Protzel, “Switchable constraints for robust pose graph
SLAM,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2012, pp. 1879–
1884.
[5] P. Agarwal, G. D. Tipaldi, L. Spinello, C. Stachniss, and W. Burgard,
“Robust map optimization using dynamic covariance scaling,” in Proc.
IEEE Int. Conf. Robot. Autom., 2013, pp. 62–69.
[6] E. Olson and P. Agarwal, “Inference on networks of mixtures for robust
robot mapping,” Int. J. Robot. Res., vol. 32, no. 7, pp. 826–840, 2013.
[7] Y. Latif, G. Huang, J. Leonard, and J. Neira, “An online sparsity-
cognizant algorithm for visual navigation,” in Proc. Robot. Sci. Syst., 2014,
pp. 36–44.
[8] P. Lajoie, S. Hu, G. Beltrame, and L. Carlone, “Modeling percep-
tual aliasing in SLAM via discrete-continuous graphical models,” IEEE
Robot. Autom. Lett., vol. 4, no. 2, pp. 1232–1239, Apr. 2019, extended
arXiv version: https://arxiv.org/pdf/1810.11692.pdf, Supplemental Mate-
rial: https://www.dropbox.com/s/vupak65wi75yzbl/2018j-RAL-DCGM-
supplemental.pdf?dl=0
[9] T. Cieslewski, S. Choudhary, and D. Scaramuzza, “Data-efficient decen-
tralized visual SLAM,” in Proc. IEEE Int. Conf. Robot. Autom., 2018,
pp. 2466–2473.
[10] R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic, “NetVLAD:
CNN architecture for weakly supervised place recognition,” in Proc. IEEE
Conf. Comput. Vision Pattern Recogni., 2016, pp. 5297–5307.
[11] A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving?
The KITTI vision benchmark suite,” in Proc. IEEE Conf. Comput. Vision
Pattern Recognit., Providence, USA, Jun. 2012, pp. 3354–3361.
[12] L. Andersson and J. Nygards, “C-SAM: Multi-robot SLAM using square
root information smoothing,” in Proc. IEEE Int. Conf. Robot. Autom.,
2008, pp. 2798–2805.
[13] B. Kim et al., “Multiple relative pose graphs for robust cooperative
mapping,” in Proc. IEEE Int. Conf. Robot. Autom., Anchorage, AK, USA,
May 2010, pp. 3185–3192.
[14] T. Bailey, M. Bryson, H. Mu, J. Vial, L. McCalman, and H. Durrant-
Whyte, “Decentralised cooperative localisation for heterogeneous teams of
mobile robots,” in Proc. IEEE Int. Conf. Robot. Autom., Shanghai, China,
May 2011, pp. 2859–2865.
[15] M. Lazaro, L. Paz, P. Pinies, J. Castellanos, and G. Grisetti, “Multi-robot
SLAM using condensed measurements,” in Proc. IEEE Int. Conf. Robot.
Autom., 2011, pp. 1069–1076.
[16] J. Dong, E. Nelson, V. Indelman, N. Michael, and F. Dellaert, “Distributed
real-time cooperative localization and mapping using an uncertainty-aware
expectation maximization approach,” in Proc. IEEE Int. Conf. Robot.
Autom., Seattle, WA, USA, May 2015, pp. 5807–5814.
[17] R. Aragues, L. Carlone, G. Calafiore, and C. Sagues, “Multi-agent local-
ization from noisy relative pose measurements,” in Proc. IEEE Int. Conf.
Robot. Autom., 2011, pp. 364–369.
[18] A. Cunningham, M. Paluri, and F. Dellaert, “DDF-SAM: Fully distributed
SLAM using constrained factor graphs,” in Proc. IEEE/RSJ Int. Conf.
Intell. Robots Syst., 2010, pp. 3025–3030.
[19] A. Cunningham, V. Indelman, and F. Dellaert, “DDF-SAM 2.0: Consistent
distributed smoothing and mapping,” in Proc. IEEE Int. Conf. Robot.
Autom., Karlsruhe, Germany, May 2013, pp. 5220–5227.
[20] W. Wang, N. Jadhav, P. Vohs, N. Hughes, M. Mazumder, and S. Gil,
“Active rendezvous for multi-robot pose graph optimization using sensing
over Wi-Fi,” 2019, arXiv: 1907.05538.
[21] M. Fischler and R. Bolles, “Random sample consensus: A paradigm for
model fitting with applications to image analysis and automated cartogra-
phy,” Commun. ACM, vol. 24, pp. 381–395, 1981.
[22] J. Neira and J. Tardós, “Data association in stochastic mapping using
the joint compatibility test,” IEEE Trans. Robot. Autom., vol. 17, no. 6,
pp. 890–897, Dec. 2001.
[23] M. Bosse, G. Agamennoni, and I. Gilitschenski, “Robust estimation and
applications in robotics,” Found. Trends Robot., vol. 4, no. 4, pp. 225–269,
2016.
[24] R. Hartley, J. Trumpf, Y. Dai, and H. Li, “Rotation averaging,” Int. J.
Comput. Vision, vol. 103, no. 3, pp. 267–305, 2013.
[25] M. Pfingsthorn and A. Birk, “Simultaneous localization and mapping with
multimodal probability distributions,” Int. J. Robot. Res., vol. 32, no. 2,
pp. 143–171, 2013.
[26] M. Pfingsthorn and A. Birk, “Generalized graph SLAM: Solving local
and global ambiguities through multimodal and hyperedge constraints,”
Int. J. Robot. Res., vol. 35, no. 6, pp. 601–630, 2016.
[27] L. Carlone and G. Calafiore, “Convex relaxations for pose graph optimiza-
tion with outliers,” IEEE Robot. Autom. Lett., vol. 3, no. 2, pp. 1160–1167,
Apr. 2018.
[28] L. Carlone, A. Censi, and F. Dellaert, “Selecting good measurements
via ℓ1 relaxation: A convex approach for robust estimation over
graphs,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst.,
2014, pp. 2667–2674, https://www.dropbox.com/s/7f304d5ag245ie4/
2014c-IROS-outlierRejection.pdf?dl=0
[29] M. Graham, J. How, and D. Gustafson, “Robust incremental SLAM with
consistency-checking,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst.,
Sep. 2015, pp. 117–124.
[30] A. Oliva and A. Torralba, “Modeling the shape of the scene: A holistic
representation of the spatial envelope,” Int. J. Comput. Vision, vol. 42,
pp. 145–175, 2001.
[31] I. Ulrich and I. Nourbakhsh, “Appearance-based place recognition
for topological localization,” in Proc. IEEE Int. Conf. Robot. Autom.,
Apr. 2000, vol. 2, pp. 1023–1029.
[32] D. Lowe, “Object recognition from local scale-invariant features,” in Proc.
Int. Conf. Comput. Vision, 1999, pp. 1150–1157.
[33] H. Bay, T. Tuytelaars, and L. V. Gool, “SURF: Speeded up robust features,”
in Proc. Eur. Conf. Comput. Vision, 2006.
[34] J. Sivic and A. Zisserman, “Video Google: A text retrieval approach to
object matching in videos,” in Proc. Int. Conf. Comput. Vision, 2003, pp.
1470–1477.
[35] N. Suenderhauf, S. Shirazi, F. Dayoub, B. Upcroft, and M. Milford, “On
the performance of ConvNet features for place recognition,” in Proc.
IEEE/RSJ Int. Conf. Intell. Robots Syst., Sep. 2015, pp. 4297–4304.
[36] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman, “Object retrieval
with large vocabularies and fast spatial matching,” in Proc. IEEE Conf.
Comput. Vision Pattern Recognit., Jun. 2007, pp. 1–8.
[37] D. Scaramuzza and F. Fraundorfer, “Visual odometry [tutorial],” IEEE
Robot. Autom. Mag., vol. 18, no. 4, pp. 80–92, Dec. 2011.
[38] D. Tardioli, E. Montijano, and A. R. Mosteo, “Visual data association in
narrow-bandwidth networks,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots
Syst., Sep. 2015, pp. 2572–2577.
[39] T. Cieslewski and D. Scaramuzza, “Efficient decentralized visual place
recognition from full-image descriptors,” in Proc. Int. Symp. Multi-Robot
Multi-Agent Syst., Dec. 2017, pp. 78–82.
[40] Y. Tian, K. Khosoussi, M. Giamou, J. P. How, and J. Kelly, “Near-optimal
budgeted data exchange for distributed loop closure detection,” in Proc.
Robot. Sci. Syst., 2018, pp. 71–80.
[41] Y. Tian, K. Khosoussi, and J. P. How, “A resource-aware approach to col-
laborative loop closure detection with provable performance guarantees,”
Jul. 2019, arXiv:1907.04904 [cs].
[42] M. Giamou, K. Khosoussi, and J. P. How, “Talk resource-efficiently to me:
Optimal communication planning for distributed loop closure detection,”
in Proc. IEEE Int. Conf. Robot. Autom., 2018, pp. 3841–3848.
[43] C. Pinciroli and G. Beltrame, “Buzz: An extensible programming language
for heterogeneous swarm robotics,” in Proc. IEEE/RSJ Int. Conf. Intell.
Robots Syst., Oct. 2016, pp. 3794–3800.
[44] M. Labbe and F. Michaud, “RTAB-Map as an open-source lidar and
visual simultaneous localization and mapping library for large-scale and
long-term online operation,” J. Field Robot., vol. 36, no. 2, pp. 416–446,
2019.
[45] G. Bradski, “The OpenCV library,” Dr. Dobb’s J. Softw. Tools, 2000.
[46] R. Smith and P. Cheeseman, “On the representation and estimation of
spatial uncertainty,” Int. J. Robot. Res., vol. 5, no. 4, pp. 56–68, 1987.
[47] DARPA, “DARPA subterranean challenge,” 2019. [Online]. Available:
https://www.subtchallenge.com/, Accessed: Sep. 9, 2019.
[48] J. Shi and C. Tomasi, “Good features to track,” in Proc. IEEE Conf.
Comput. Vision Pattern Recognit., 1994, pp. 593–600.
[49] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: An efficient
alternative to SIFT or SURF,” in Proc. Int. Conf. Comput. Vision, 2011,
pp. 2564–2571.
[50] F. Dellaert, “Factor graphs and GTSAM: A hands-on introduction,”
Georgia Inst. Technol., Atlanta, GA, USA, Tech. Rep. GT-RIM-CP&R-
2012-002, Sep. 2012.
[51] C. Pinciroli et al., “ARGoS: A modular, parallel, multi-engine simulator
for multi-robot systems,” Swarm Intell., vol. 6, no. 4, pp. 271–295, 2012.
[52] P. Lajoie, B. Ramtoula, Y. Chang, L. Carlone, and G. Beltrame,
“DOOR-SLAM: Distributed, online, and outlier resilient SLAM for
robotic teams,” Tech. Rep., Dept. Comput. Softw. Eng., École
Polytechnique de Montréal, Montreal, QC, Canada, 2019, arXiv
preprint: 1909.12198, https://arxiv.org/pdf/1909.12198.pdf, Supplemental
Material: https://www.dropbox.com/s/wgoqhiz8b96dl88/supplemental_
material.pdf?dl=0.