William Grosky
2018-12-03 00:09:38 UTC
====================================================================
CALL FOR PAPERS
SECOND INTERNATIONAL WORKSHOP ON MULTIMEDIA PRAGMATICS (MMPrag'19)
Co-Located with the IEEE SECOND INTERNATIONAL CONFERENCE ON MULTIMEDIA INFORMATION PROCESSING AND RETRIEVAL (MIPR'19)
March 28-30, 2019 - San Jose, California
Website: http://mipr.sigappfr.org/19/
========================== IMPORTANT DATES ==========================
NEW DATES (please note):
Submissions due: January 25, 2019
Acceptance notification: February 1, 2019
Camera-ready: February 8, 2019
Workshop date: March 28, 2019
============================ DESCRIPTION ============================
Most multimedia objects are spatio-temporal simulacra of the real world. This supports our view that the next grand challenge
for our community will be understanding and formally modeling the flow of life around us, over many modalities and scales. As
technology advances, the nature of these simulacra will evolve as well, becoming more detailed and revealing more information
about the nature of reality.
Currently, IoT is the state-of-the-art organizational approach for constructing complex representations of the flow of life around us.
Various, perhaps pervasive, sensors working collectively will broadcast representations of real events in real time. It
will be our task to continuously extract the semantics of these representations and possibly react to them by injecting
response actions into the mix to ensure a desired outcome.
Pragmatics studies context and how it affects meaning, and context is usually culturally, socially, and historically based. For
example, pragmatics would encompass the speaker’s intent, body language, and penchant for sarcasm, as well as other signs, usually
culturally based, such as the speaker’s type of clothing, which could influence a statement’s meaning. Generic signal/sensor-based
retrieval should also use syntactic, semantic, and pragmatic approaches. If we are to understand and model the flow of
life around us, this will be a necessity.
Our community has successfully developed various approaches to decoding the syntax and semantics of these artifacts. The development
of techniques that exploit contextual information, however, is still in its infancy. As the data horizon expands through the
ever-increasing use of metadata, we can certainly put all media on a more equal footing.
The NLP community has its own set of approaches to semantics and pragmatics. Natural language is certainly an excellent exemplar of
multimedia, and the use of audio and text features has played a part in the development of our field.
After a successful first workshop in Miami, we intend to continue the tradition with this second edition. Now is the perfect time
to keep actively promoting this cross-fertilization of ideas to solve some very hard and important problems.
=============================== AREAS ===============================
Authors are invited to submit regular papers (6 pages), short papers (4 pages), and demo papers (4 pages) at
https://easychair.org/conferences/?conf=mmprag19. The workshop website is mipr.sigappfr.org/19/.
Topics of interest include, but are not limited to:
- Affective computing
- Annotation techniques for images/videos/other sign-based modalities
- Computational semiotics
- Cross-cultural multimodal recognition techniques
- Digital Humanities
- Distributional semantics
- Event modeling, recognition, and understanding
- Gesture recognition
- Human-machine multimodal interaction
- Integration of multimodal features
- Machine learning for multimodal interaction
- Multimodal analysis of human behavior
- Multimodal data modeling, dataset development, sensor fusion
- Multimodal deception detection
- Ontologies
- Sentiment analysis
- Structured semantic embeddings
- Word and feature embeddings: generation, semantic property discovery, corpus dependencies, sensitivity analysis, retrieval aids
To be included in the IEEE Xplore Digital Library, accepted papers must be registered and presented.
============================ ORGANIZATION ============================
Chairs:
R. Chbeir, U of Pau, FR
W. Grosky, U Mich-D, US
Program Committee:
W. Abd-Almageed, ISI, US
M. Abouelenien, U Mich-D, US
R. Agrawal, ITL, ERDC, US
A. Aizawa, NII, Japan
Y. Aloimonos, UMD, US
A. Belz, U of Brighton, UK
R. Bonacin, CTI, BR
J.L. Cardoso, CTI, BR
F. de Franca, UFABC, BR
J. Hirschberg, Columbia U, US
D. Hogg, U of Leeds, UK
A. Jadhav, IBM, US
C. Leung, HK Baptist U, HK
D. Martins, UFABC, BR
A. Pease, Infosys, US
J. Pustejovsky, Brandeis, US
T. Ruas, U Mich-D, US
V. Rubin, UWO, CA
S. Satoh, NII, Japan
A. Sheth, Wright St U, US
P. Stanchev, Kettering U, US
J. Tekli, American U, LEB