Presentation
Learning Multimodal Affinities for Textual Editing in Images
Session
Technical Papers
Event Type
Technical Paper
Research & Education
Description
We present an unsupervised method to learn multimodal affinities of textual entities in document images, considering their style, syntax, semantics, and geometry, and we demonstrate its applicability to various editing operations.
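The core idea in the description, grouping textual entities by combining per-modality similarities into one affinity, can be sketched roughly as follows. This is an illustrative toy only, not the paper's actual method: the feature vectors, the RBF affinity, and the threshold-and-connected-components clustering step are all assumptions standing in for the learned multimodal affinities and unsupervised clustering the abstract describes.

```python
import numpy as np

# Hypothetical features for four textual entities detected in a document
# image, one small vector per modality. Values are illustrative only.
style     = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], float)        # e.g. font/color
semantics = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]], float)
geometry  = np.array([[0.0], [0.1], [0.9], [1.0]], float)            # e.g. y-position

def affinity(feats):
    """Pairwise similarity from squared distances (RBF kernel)."""
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2)

# Combine modalities into a single multimodal affinity matrix by
# multiplying per-modality affinities (high only if similar in ALL modalities).
A = affinity(style) * affinity(semantics) * affinity(geometry)

# Unsupervised grouping: threshold the affinity and take connected
# components of the resulting graph.
adj = A > 0.5
labels = -np.ones(len(A), dtype=int)
for i in range(len(A)):
    if labels[i] < 0:
        labels[i] = i
        stack = [i]
        while stack:
            j = stack.pop()
            for k in np.flatnonzero(adj[j] & (labels < 0)):
                labels[k] = labels[i]
                stack.append(k)

print(labels)  # entities 0 and 1 fall in one group, 2 and 3 in another
```

Once entities are grouped this way, an editing operation (e.g. recoloring or deleting) applied to one entity can be propagated to its whole group, which is the kind of use the description points to.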