  1. [8], image labeling and retrieval [9], etc. The term multimodal fusion is used to indicate the integration of information from multiple modalities. In this work, we fuse text-, audio- and image-based models for the estimation of word semantic similarity. Two main fusion methods are employed here, namely middle and late fusion.
  2. semanticscholar.org

    This work estimates multimodal word representations via the fusion of auditory and visual modalities with the text modality through middle and late fusion of representations with modality weights assigned to each of the unimodal representations. Traditional semantic models are disembodied from the human perception and action. In this work, we attempt to address this problem by grounding ...
  3. pure.unic.ac.cy

    Sensory-Aware Multimodal Fusion for Word Semantic Similarity Estimation. / Paraskevopoulos, George; Karamanolakis, Giannis; Iosif, Elias et al. 2017. Paper presented at MultiLearn Workshop, Kos island, Greece. Research output: Contribution to conference › Paper › peer-review
  4. gkaramanolakis.github.io

    Sensory-Aware Multimodal Fusion for Word Semantic Similarity Estimation. Georgios Paraskevopoulos, Giannis Karamanolakis, Elias Iosif, Aggelos Pikrakis, and Alexandros Potamianos. EUSIPCO 2017, Multimodal processing, modeling and learning approaches for human-computer/robot interaction (Multi-Learn) workshop, Kos island, Greece (Oral Presentation)
  5. georgepar.github.io

    Sensory-Aware Multimodal Fusion for Word Semantic Similarity Estimation. Georgios Paraskevopoulos, Giannis Karamanolakis, Elias Iosif, Aggelos Pikrakis, Alexandros Potamianos. MultiLearn 2017: Multimodal Processing, Modeling and Learning for Human-Computer/Robot Interaction Workshop, 2017. A real-time approach for gesture recognition using the ...
  6. aclanthology.org

    Abstract: Multimodal learning is generally expected to make more accurate predictions than text-only analysis. Although various methods for fusing multimodal inputs have been proposed for sentiment analysis tasks, attention-based language models may inhibit these fusion methods from learning non-verbal modalities, because non-verbal ones are ...
  7. ieeexplore.ieee.org

    Word polysemy poses a formidable challenge in the semantic similarity task, especially for complex Chinese semantic information. However, most existing methods tend to emphasize information expansion, often overlooking the fact that the added information may be either irrelevant or only weakly correlated. In view of this, we propose a novel approach that fuses knowledge enhancement and context ...
  8. posits that words appearing in similar contexts tend to have similar meanings [7]. This hypothesis allows for representing words as vectors in a continuous space, where semantic similarity is reflected by vector proximity [8]. This shift from symbolic to distributed representations has revolutionized NLP, enabling advancements in tasks like ...
  9. sciencedirect.com

    Sentiment is crucial to human interaction, shaping communication and decisions [1]. As social media and sensor technologies evolve, multimodal sentiment analysis harnesses diverse data like text, audio, and video to accurately gauge sentiment scores [2]. Prior research in multimodal sentiment analysis has primarily concentrated on facilitating interaction and integration among modalities.
  10. ieeexplore.ieee.org

    Multimodal sensors, including vision sensors and wearable sensors, offer valuable complementary information for accurate recognition tasks. Nonetheless, the heterogeneity among sensor data from different modalities presents a formidable challenge in extracting robust multimodal information amidst noise. In this article, we propose an innovative approach, named semantic-aware multimodal ...
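Several of the results above describe fusing text-, audio-, and image-based word representations via middle and late fusion with per-modality weights. As a rough illustration only (not the cited paper's actual implementation; the vectors, weights, and function names here are made up for the sketch), middle fusion concatenates weighted unimodal vectors before measuring similarity, while late fusion computes a similarity score per modality and then combines the scores:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def middle_fusion(reps_w1, reps_w2, weights):
    # Middle fusion: scale each unimodal vector by its modality weight,
    # concatenate into one multimodal vector per word, then compare.
    fused1 = [w * x for rep, w in zip(reps_w1, weights) for x in rep]
    fused2 = [w * x for rep, w in zip(reps_w2, weights) for x in rep]
    return cosine(fused1, fused2)

def late_fusion(reps_w1, reps_w2, weights):
    # Late fusion: compute one similarity score per modality,
    # then return the weight-normalized combination of the scores.
    scores = [cosine(u, v) for u, v in zip(reps_w1, reps_w2)]
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, scores)) / total

# Toy example: two modalities (e.g. text and audio), 2-d vectors each.
word1 = [[1.0, 0.0], [0.0, 1.0]]
word2 = [[1.0, 0.0], [0.0, 1.0]]
print(middle_fusion(word1, word2, [0.5, 0.5]))  # identical reps -> 1.0
print(late_fusion(word1, word2, [0.5, 0.5]))    # identical reps -> 1.0
```

The practical difference: middle fusion lets cross-modal dimensions interact in the fused vector space, while late fusion keeps modalities independent until the final score, which makes it easy to drop or reweight a modality per word.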
