Friday, January 26, 2007

PAWS Meeting 2007-01-26

Talk by Sergey Sosnovsky
Flexi-OLM: Flexible Open Learner Modeling

Summary. Three papers by Susan Bull presented at ITS conferences (2004, 2006). OLM (open learner modeling) means visualizing the user model (UM) and letting the learner negotiate and/or change it. Students learn by "reflecting" on the observable content of the UM. Not all OLMs are for students; some are for teachers. Flexi-OLM offers different views of the UM. Domain: C programming. User knowledge level is color-coded. Domain structure: topics with child concepts. Each piece of interactive content is indexed with a single concept. Misconceptions are also modeled. Evidence propagation is done by aggregating means. Users can edit their models when: 1) a new user comes, 2) learning occurs outside the system, or 3) the system estimates knowledge incorrectly. Persuading the LM: the user expresses disagreement, the system explains its beliefs and possibly allows the user to prove their point. Negotiating the LM: chatting with the system, trying to align the user's beliefs and the system's beliefs. The chat-bot is controlled by a person; 350 negotiation pieces were pre-authored.
LM visualizations: hierarchy, lectures, concept map, ranked index, summary.
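The notes do not give the exact aggregation formula, so here is a minimal sketch of the general idea only: mean-based aggregation of evidence per concept, mean-based propagation up to topics, and a color band per knowledge level. All names and thresholds are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of mean-based evidence aggregation and color-coding for an
# OLM-style topic/concept hierarchy. Names and thresholds are illustrative.

def concept_level(evidence):
    """Aggregate observations (each in 0..1) for one concept as a plain mean."""
    return sum(evidence) / len(evidence) if evidence else 0.0

def topic_level(concept_levels):
    """Propagate evidence up: a topic's level is the mean of its child concepts."""
    return sum(concept_levels) / len(concept_levels) if concept_levels else 0.0

def knowledge_color(level):
    """Map a knowledge estimate to a color band for the learner-model view."""
    if level >= 0.66:
        return "green"   # well understood
    if level >= 0.33:
        return "yellow"  # partially understood
    return "red"         # weak or possible misconception

# Example: one topic ("pointers") with two indexed concepts.
pointers = {
    "pointer_arithmetic": [1.0, 0.5, 0.0],
    "dereferencing": [1.0, 1.0],
}
levels = {c: concept_level(e) for c, e in pointers.items()}
print(levels, knowledge_color(topic_level(levels.values())))
```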
Experiment 1 (2004). Is having different visualizations beneficial? Is there a preference for a particular visualization? 23 subjects. Users were in general positive about the system.
Experiment 2 (2006). What are student preferences regarding persuading, negotiating, and editing? 8 subjects (3rd-year grad students). No significant patterns found.
Experiment 3 (2006). Chat-bot study. Subjects liked the bot less when it did not agree with them.

Discussion. There was no attempt to investigate gaming behaviors. A 5-point Likert scale is prone to over-inflating the "N/A" category; a 4-point scale is sometimes better. The study design is rather weak: no implicit hypotheses were tested, only explicit ones (questionnaires). The choice of subjects is questionable (knowledgeable C programmers rather than students currently learning the material).

Friday, January 19, 2007

PAWS Meeting 2007-01-19

1st Talk by Joerg Brunstein

Topic: Eye movements as a window to the mind.

Summary:

The paper mainly describes an eye-tracking experiment.
Subjects retold and described a story after it was told to them, in the light and in the dark. Their eye movements while looking at a blank whiteboard were measured and contrasted with the case of looking at the picture itself.

Discussion:
How does this experiment fit into our visualization interface design?
How would we do it differently in our studies?
Possible approach: measure eye movements over the context and annotations, and use the eye-tracking results as a confirmation check.



2nd Talk by Chirayu Wongchokprasitti

Topic: NewsMe: A Case Study for Adaptive News Systems with Open User Model (Preliminary Examination Rehearsal)

Summary:

NewsMe is a Personalized News Access System.
It allows users to provide feedback about their interests in news. The feedback is then used to construct the user model and recommend relevant news articles to users. The system currently retrieves news from 82 RSS feeds from 21 sources.
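A rough sketch of how such feedback could drive recommendations, assuming a bag-of-words user profile and cosine similarity. This illustrates the general content-based approach, not NewsMe's actual algorithm; all names and data are made up.

```python
# Toy content-based news recommender: explicit feedback updates a term-weight
# profile; candidate articles are ranked by cosine similarity to the profile.
from collections import Counter
from math import sqrt

def tokens(text):
    return [w for w in text.lower().split() if len(w) > 2]

def update_profile(profile, article_text, rating):
    """Add rating-weighted term counts to the profile; rating in [-1, 1]."""
    for term, count in Counter(tokens(article_text)).items():
        profile[term] = profile.get(term, 0.0) + rating * count
    return profile

def cosine(profile, article_text):
    vec = Counter(tokens(article_text))
    dot = sum(profile.get(t, 0.0) * c for t, c in vec.items())
    norm = sqrt(sum(v * v for v in profile.values())) * sqrt(sum(c * c for c in vec.values()))
    return dot / norm if norm else 0.0

profile = {}
update_profile(profile, "Toyota expands hybrid production in Japan", rating=1.0)
candidates = ["Honda reports record sales in Japan", "Local weather forecast for Friday"]
print(sorted(candidates, key=lambda a: cosine(profile, a), reverse=True))
```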

Discussion/Suggestion:
  • The blacklist tracking graphs look odd because only a few people actually used that function; those specific cases could be discussed separately. It might be that users were not aware of the blacklist function; in any case, users should be told about the system's capabilities before the experiment.
  • Precision on the business topic is a bit higher than on other topics. Why? And why topic-based?
  • If users had better knowledge of a topic, did they do better? (This wasn't mentioned or analyzed in the study.)
  • Implicit data is not reliable; it is noisy. For example, since the question asked about the Japanese car industry, people tended to recognize a news title and consider that news relevant.
  • Suggested presentation order: describe the problem and technology > describe the study and what you want to explore further > demo the system > explain the experiment.

Tuesday, January 16, 2007

PAWS Meeting 2007-01-12

1st talk by Sharon:

Ko-Kang Chu, Maiga Chang and Yen-The Hsia. Designing a Course Recommendation System on Web Based on the Students' Course Selection Records

A system recommends non-mandatory courses to students based on the model of their topic preferences. The model of preferences is populated using the history of non-mandatory courses a student took earlier.

Every course belongs to several topics (the relations are weighted). The system classifies the student's interests and recommends new courses similar to the ones she found interesting.
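As an illustration of this topic-based approach (not the paper's actual algorithm), here is a minimal sketch with made-up courses, topics, and weights:

```python
# Toy topic-based course recommender: weighted course-to-topic relations,
# a topic-preference profile built from taken courses, and ranking of the
# remaining courses by overlap with that profile. All data is illustrative.

COURSE_TOPICS = {
    "Intro to AI":      {"AI": 0.8, "Programming": 0.2},
    "Machine Learning": {"AI": 0.7, "Statistics": 0.3},
    "Databases":        {"Data Management": 0.9, "Programming": 0.1},
    "Data Mining":      {"AI": 0.4, "Statistics": 0.3, "Data Management": 0.3},
}

def interest_profile(taken):
    """Accumulate topic weights over the courses a student has already taken."""
    profile = {}
    for course in taken:
        for topic, w in COURSE_TOPICS[course].items():
            profile[topic] = profile.get(topic, 0.0) + w
    return profile

def recommend(taken):
    """Rank untaken courses by how well their topics match the student's profile."""
    profile = interest_profile(taken)
    candidates = [c for c in COURSE_TOPICS if c not in taken]
    score = lambda c: sum(profile.get(t, 0.0) * w for t, w in COURSE_TOPICS[c].items())
    return sorted(candidates, key=score, reverse=True)

print(recommend(["Intro to AI", "Machine Learning"]))
# A student who took AI courses gets even more AI-heavy suggestions, which is
# exactly the behavior criticized in the discussion below.
```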

The results of the system's evaluation do not support the prior hypothesis that recommendation accuracy would go up over time.

Discussion:

The main criticism of the paper concerned the idea of topic-based modeling of interests for course recommendation. If a student took 3 courses on AI, the system would conclude that she has a strong interest in AI, but the student may not need any more AI courses; by that time she already knows everything she needs.

2nd talk by Tomek:

Nikolaus Bee, Helmut Prendinger, Arturo Nakasone, Elisabeth Andre, and Mitsuru Ishizuka. AutoSelect: What You Want Is What You Get: Real-Time Processing of Visual Attention and Affect

The paper presents a model and an experiment on using eye-tracking and physiological data (galvanic skin response, pulse rate) to predict a user's preferences. The authors try to build a system that predicts when a subject makes up their mind towards one of two options, based on changes in unconscious behavior caused by the forming preference (the relation between attention and emotions).

In the experiment, a user chooses between two options (preferring one tie over another).
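The notes do not describe the prediction mechanism in detail; below is a toy sketch, assuming a simple gaze-share rule over the two options. The threshold, the minimum sample count, and the omission of the physiological signals are simplifying assumptions of the sketch, not the paper's actual model.

```python
# Toy illustration: infer which of two options a user prefers from the drift
# of visual attention over time. All parameters are illustrative assumptions.

def predict_choice(gaze_samples, threshold=0.7, min_samples=30):
    """gaze_samples: sequence of 'left' / 'right' fixation labels.
    Returns the predicted option once its gaze share exceeds the threshold,
    or None if no decision is reached."""
    left = right = 0
    for i, sample in enumerate(gaze_samples, start=1):
        if sample == "left":
            left += 1
        elif sample == "right":
            right += 1
        total = left + right
        if i >= min_samples and total:
            if left / total >= threshold:
                return "left"
            if right / total >= threshold:
                return "right"
    return None

# Example: attention gradually drifting toward the right-hand tie.
samples = ["left"] * 10 + ["right"] * 40
print(predict_choice(samples))  # -> "right"
```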

Discussion:

Criticism:

- The cognitive model reported in the paper is questionable. What if users were presented not with ties but with shoes or laptops? What if the choice were among 5 or 100 alternatives? What if the alternatives are not similar (not only ties)?

- The experiment affects the subjects. The system makes a guess about which tie a user would like to buy, and this influences the results. What if a user was not sure about her choice? What if a user tries to be nice? Or, vice versa, tries to contradict the system?

There were more comments during the discussion. If anyone remembers, please add them here.

How can we apply models like this in our work?

Two directions:

1. Using Eye-tracking as a research tool for:

- Systems (Interface) evaluation

- Modeling information:

a. As a supplementary source of modeling info, for example to confirm that the user is working on what we think they are working on -> we model them correctly (Conati)

b. As one of the main sources of modeling (as in the discussed paper)

2. Using eye-tracking as a novel interface in real applications.