Object–action interface
Object–action interface, also abbreviated as OAI, is an extension of the graphical user interface, especially related to the direct manipulation user interface. It can help designers create better human–computer interfaces and increase the usability of a product.
The model gives the object priority over the actions; that is, the object is selected first, and any action is then performed on it. OAI adheres to this ordering.
The OAI model graphically represents the user's workplace using metaphors and lets the user perform actions on objects. The sequence of work is to first select an object graphically (using a mouse or other pointing device) and then perform an action on the selected object. The result of the action is then shown graphically to the user. In this way, the user is relieved of memory limitations and of the syntactic complexity of the actions; moreover, the interface emulates WYSIWYG. This feature of OAI lets users control their sequence of actions and visualize the effects at runtime: if an action produces an undesired effect, the user simply reverses the sequence of actions.
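The object-first, reversible-action pattern described above can be sketched in code. This is a minimal illustration only; all class, function, and method names here (Editor, Document, append_text) are invented for the example and are not part of any published OAI specification.

```python
# Minimal sketch of object-first interaction with reversible actions.
# Names and structure are illustrative assumptions, not a standard API.

class Document:
    """A hypothetical interface object the user selects first."""
    def __init__(self, text=""):
        self.text = text

class Editor:
    def __init__(self):
        self.selected = None   # the object is chosen before any action
        self.history = []      # lets the user reverse an undesired action

    def select(self, obj):
        """Step 1: select the object (e.g. by pointing and clicking)."""
        self.selected = obj

    def apply(self, action, *args):
        """Step 2: perform an action on the previously selected object.
        Each action returns its own inverse, which is kept for undo."""
        undo = action(self.selected, *args)
        self.history.append(undo)

    def undo(self):
        """Reverse the most recent action, restoring the prior state."""
        if self.history:
            self.history.pop()()

def append_text(doc, suffix):
    """An example action: append text, returning an inverse closure."""
    old = doc.text
    doc.text += suffix
    return lambda: setattr(doc, "text", old)

# Usage: select the object, then act on it; undo reverses the effect.
editor = Editor()
doc = Document("Hello")
editor.select(doc)
editor.apply(append_text, ", world")
assert doc.text == "Hello, world"
editor.undo()
assert doc.text == "Hello"
```

The undo stack is what makes the sequence of actions reversible: because every action records its inverse, an undesired effect can be rolled back without the user memorizing any command syntax.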
In the action–object model, the computer is seen as a tool for performing various actions, whereas in the object–action model the user derives a sense of control from performing actions directly on objects. The computer is then seen as a medium through which different tools are represented, making the interaction isomorphic to interacting with objects in the real world.
Designing an OAI model starts with examining and understanding the tasks the system is to support. The task domain includes the universe of objects within which the user works to accomplish a goal, as well as the domain of all possible actions the user can perform. Once these task objects and actions are agreed upon, the designer creates an isomorphic representation of the corresponding interface objects and actions.
The figure above shows how the designer maps the objects of the user's world to metaphors, and the user's actions to plans. Interface actions are usually performed with a pointing device or keyboard, and therefore have to be visible to the user so that the user can decompose a plan into steps of actions such as pointing, clicking, and dragging.
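The mapping described above can be sketched as simple lookup tables. The task domain (a letter-filing example) and the metaphors below are invented for illustration; the point is only that every task object and task action gets exactly one interface counterpart, and that each high-level action decomposes into primitive pointer steps.

```python
# Illustrative sketch of the task-to-interface mapping step.
# The task domain and metaphors are assumed examples, not from any
# specific system.

task_objects = {"letter", "folder"}
task_actions = {"write", "file"}

# Each task object is represented by a visual metaphor.
interface_objects = {
    "letter": "document icon",
    "folder": "folder icon",
}

# Each task action becomes a plan, decomposed into primitive steps
# (pointing, clicking, dragging) performed with the pointing device.
interface_actions = {
    "write": [("point", "document icon"), ("double-click", "document icon"),
              ("type", "text")],
    "file":  [("point", "document icon"), ("press", "mouse button"),
              ("drag", "folder icon"), ("release", "mouse button")],
}

# The representation is isomorphic: every task object and action has
# exactly one interface counterpart.
assert set(interface_objects) == task_objects
assert set(interface_actions) == task_actions
```

Keeping the mapping one-to-one is what makes the interface predictable: a user who knows the task domain can infer the interface, and vice versa.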
In this way, direct manipulation user interfaces (DMUIs) provide a snapshot of real-world situations and map the natural sequence of the user's work onto the interface. Users therefore do not have to memorize a course of actions, which reduces the time required to become familiar with a new model of work. It also significantly reduces the memory load on users and thereby enhances usability.
Tasks are composed of objects and actions at different levels. The positional hierarchy of any given object and its related actions may not suit every user, but because it is comprehensible it provides a great deal of usefulness.
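The multi-level composition of tasks can be sketched as a small tree in which both objects and actions appear at every level. The book-editing hierarchy below is an invented example, not drawn from any particular system.

```python
# Hypothetical task hierarchy: objects and actions exist at several
# levels, from the whole task object down to its smallest parts.
task_hierarchy = {
    "object": "book",
    "actions": ["publish", "archive"],            # top-level actions
    "children": [
        {"object": "chapter",   "actions": ["reorder", "delete"]},
        {"object": "paragraph", "actions": ["edit", "move"]},
        {"object": "word",      "actions": ["replace", "format"]},
    ],
}

def all_actions(node):
    """Collect the actions available at every level of the hierarchy."""
    acts = list(node.get("actions", []))
    for child in node.get("children", []):
        acts.extend(all_actions(child))
    return acts

# Actions from both the top level and the nested levels are reachable.
assert "publish" in all_actions(task_hierarchy)
assert "format" in all_actions(task_hierarchy)
```

A designer can walk such a tree to check that each object, at whatever level a given user works, has its related actions positioned alongside it.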