from Wikipedia
gtk3-demo, a program to demonstrate the widgets in GTK+ version 3

In a graphical user interface (GUI), a graphical widget (also graphical control element or control) is an element of interaction, such as a button or a scroll bar. Controls are software components that a computer user interacts with through direct manipulation to read or edit information about an application. User interface libraries such as Windows Presentation Foundation, Qt, GTK, and Cocoa contain a collection of controls and the logic to render them.[1]

Each widget facilitates a specific type of user-computer interaction, and appears as a visible part of the application's GUI as defined by the theme and rendered by the rendering engine. The theme makes all widgets adhere to a unified aesthetic design and creates a sense of overall cohesion. Some widgets support interaction with the user, for example labels, buttons, and check boxes. Others act as containers that group the widgets added to them, for example windows, panels, and tabs.

Structuring a user interface with widget toolkits allows developers to reuse code for similar tasks, and provides users with a common language for interaction, maintaining consistency throughout the whole information system.

Graphical user interface builders facilitate the authoring of GUIs in a WYSIWYG manner employing a user interface markup language. They automatically generate all the source code for a widget from general descriptions provided by the developer, usually through direct manipulation.

History


Around 1920, the word widget entered American English as a generic term for any useful device, particularly a product manufactured for sale; a gadget.

In 1988, the term widget was first attested in the context of Project Athena and the X Window System. In An Overview of the X Toolkit, Joel McCormack and Paul Asente write:[2]

The toolkit provides a library of user-interface components ("widgets") like text labels, scroll bars, command buttons, and menus; enables programmers to write new widgets; and provides the glue to assemble widgets into a complete user interface.

The same year, in the manual X Toolkit Widgets - C Language X Interface by Ralph R. Swick and Terry Weissman, it says:[3]

In the X Toolkit, a widget is the combination of an X window or sub window and its associated input and output semantics.

Finally, still in the same year, Ralph R. Swick and Mark S. Ackerman explain where the term widget came from:[4]

We chose this term since all other common terms were overloaded with inappropriate connotations. We offer the observation to the skeptical, however, that the principal realization of a widget is its associated X window and the common initial letter is not un-useful.

Usage

Example of enabled and disabled widgets; the frame at the bottom is disabled, and its contents are grayed out.

A widget displays an arrangement of information that the user can change, such as a window or a text box. The defining characteristic of a widget is to provide a single interaction point for the direct manipulation of a given kind of data. In other words, widgets are basic visual building blocks which, combined in an application, hold all the data processed by the application and the available interactions on this data.

GUI widgets are graphical elements used to build the human-machine interface of a program. They are implemented as software components. Widget toolkits and software frameworks, such as GTK+ or Qt, collect them in software libraries so that programmers can use them to build GUIs for their programs.

A family of common reusable widgets has evolved for holding general information, based on research at Xerox's Palo Alto Research Center for the Xerox Alto user interface. Various implementations of these generic widgets are often packaged together in widget toolkits, which programmers use to build graphical user interfaces (GUIs). Most operating systems include a set of ready-to-tailor widgets that a programmer can incorporate in an application, specifying how each is to behave.[5] In object-oriented programming (OOP), each type of widget is generally defined as a class, and many widgets are derived through class inheritance.
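The class-based structure described above can be illustrated with a minimal, toolkit-agnostic sketch. The class names here (Widget, Button, CheckBox) are purely illustrative and do not belong to any real toolkit; real toolkits apply the same inheritance pattern with far richer base classes.

```python
# Minimal sketch of widgets defined as classes, with behavior
# inherited and specialized through subclassing (illustrative names).

class Widget:
    """Base class holding state common to every widget."""
    def __init__(self, label=""):
        self.label = label
        self.enabled = True

class Button(Widget):
    """A clickable widget derived from the common base."""
    def __init__(self, label, on_click):
        super().__init__(label)
        self.on_click = on_click

    def click(self):
        if self.enabled:          # disabled widgets ignore input
            self.on_click()

class CheckBox(Button):
    """Inherits click handling from Button; adds a checked state."""
    def __init__(self, label):
        super().__init__(label, self.toggle)
        self.checked = False

    def toggle(self):
        self.checked = not self.checked
```

Because CheckBox derives from Button, which derives from Widget, it inherits both the enabled/disabled gating and the click dispatch without re-implementing them.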

In the context of an application, a widget may be enabled or disabled at a given point in time. An enabled widget has the capacity to respond to events, such as keystrokes or mouse actions. A widget that cannot respond to such events is considered disabled. The appearance of a widget typically differs depending on whether it is enabled or disabled; when disabled, a widget may be drawn in a lighter color ("grayed out") or be obscured visually in some way. See the adjacent image for an example.

The benefit of disabling unavailable controls rather than hiding them entirely is that users are shown that the control exists but is currently unavailable (with the implication that changing some other control may make it available), instead of possibly leaving the user uncertain about where to find the control at all. On pop-up dialogues, buttons might appear greyed out shortly after appearance to prevent accidental clicking or inadvertent double-tapping.
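The enabled/disabled behavior described above can be sketched in a few lines of toolkit-agnostic Python; the Control class and its methods are hypothetical, though real toolkits expose analogous calls (for example, setEnabled() in Qt or set_sensitive() in GTK).

```python
# Sketch of enabling/disabling a control: a disabled control is still
# drawn (grayed out) but no longer responds to events (illustrative).

class Control:
    def __init__(self, label):
        self.label = label
        self.enabled = True
        self.clicks = 0

    def set_enabled(self, flag):
        self.enabled = flag

    def render(self):
        # Showing the control grayed out tells the user it exists
        # but is currently unavailable, rather than hiding it.
        return self.label if self.enabled else f"{self.label} [grayed out]"

    def click(self):
        if self.enabled:          # events are ignored while disabled
            self.clicks += 1

ok = Control("OK")
ok.set_enabled(False)
ok.click()                        # ignored: control is disabled
ok.set_enabled(True)
ok.click()                        # now handled
```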

Widgets are sometimes qualified as virtual to distinguish them from their physical counterparts, e.g. virtual buttons that can be clicked with a pointer, vs. physical buttons that can be pressed with a finger (such as those on a computer mouse).

A related (but different) concept is the desktop widget, a small specialized GUI application that provides some visual information and/or easy access to frequently used functions such as clocks, calendars, news aggregators, calculators and desktop notes. These kinds of widgets are hosted by a widget engine.

List of common generic widgets

Various widgets shown in Ubuntu
Qt widgets rendered according to three different skins (artistic designs): Plastik, Keramik, and Windows

Selection and display of collections

  • Button – control which can be clicked upon to perform an action. An equivalent to a push-button as found on mechanical or electronic instruments.
    • Radio button – control which can be clicked upon to select one option from a selection of options, similar to selecting a radio station from a group of buttons dedicated to radio tuning. Radio buttons always appear in pairs or larger groups, and only one option in the group can be selected at a time; selecting a new item from the group's buttons also de-selects the previously selected button.
    • Check box – control which can be clicked upon to enable or disable an option. Also called a tick box. The box indicates an "on" or "off" state via a check mark/tick ☑ or a cross ☒. Can be shown in an intermediate state (shaded or with a dash) to indicate that various objects in a multiple selection have different values for the property represented by the check box. Multiple check boxes in a group may be selected, in contrast with radio buttons.
    • Toggle switch – functionally similar to a check box; can be toggled on and off, but unlike a check box this typically has an immediate effect.
    • Toggle button – functionally similar to a check box but appears as a button; works as a switch that can be toggled on and off.
    • Split button – control combining a button (typically invoking some default action) and a drop-down list with related, secondary actions
    • Cycle button – a button that cycles its content through two or more values, thus enabling selection of one from a group of items.
  • Slider – control with a handle that can be moved up and down (vertical slider) or right and left (horizontal slider) on a bar to select a value (or a range if two handles are present). The bar allows users to make adjustments to a value or process throughout a range of allowed values.
  • List box – a graphical control element that allows the user to select one or more items from a list contained within a static, multiple line text box.
  • Spinner – value input control which has small up and down buttons to step through a range of values
  • Drop-down list – A list of items from which to select. The list normally only displays items when a special button or indicator is clicked.
  • Menu – control with multiple actions which can be clicked upon to choose a selection to activate
    • Context menu – a type of menu whose contents depend on the context or state in effect when the menu is invoked
    • Pie menu – a circular context menu where selection depends on direction
  • Menu bar – a graphical control element which contains drop down menus
  • Toolbar – a graphical control element on which on-screen buttons, icons, menus, or other input or output elements are placed
    • Ribbon – a hybrid of menu and toolbar, displaying a large collection of commands in a visual layout through a tabbed interface.
  • Combo box (text box with attached menu or List box) – A combination of a single-line text box and a drop-down list or list box, allowing the user to either type a value directly into the control or choose from the list of existing options.
  • Icon – a quickly comprehensible symbol of a software tool, function, or a data file.
  • Tree view – a graphical control element that presents a hierarchical view of information
  • Grid view or datagrid – a spreadsheet-like tabular view of data that allows numbers or text to be entered in rows and columns.

Navigation

  • Link – Text with some kind of indicator (usually underlining and/or color) that indicates that clicking it will take one to another screen or page.
  • Tab – a graphical control element that allows multiple documents or panels to be contained within a single window
  • Scrollbar – a graphical control element by which continuous text, pictures, or any other content can be scrolled in a predetermined direction (up, down, left, or right)
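The radio-button behavior listed above (selecting one option deselects the previously selected one) can be modeled in a short sketch; the RadioButton class and its group list are illustrative, not taken from any real toolkit.

```python
# Sketch of radio-button group exclusivity: at most one button in a
# group is selected at any time (illustrative names).

class RadioButton:
    def __init__(self, label, group):
        self.label = label
        self.selected = False
        self.group = group
        group.append(self)

    def select(self):
        # Selecting this button deselects every other button
        # in the same group.
        for button in self.group:
            button.selected = (button is self)

group = []
am = RadioButton("AM", group)
fm = RadioButton("FM", group)
am.select()
fm.select()    # selecting FM deselects AM
```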

Text/value input

  • Text box (edit field) – a graphical control element intended to enable the user to input text

Output

  • Label – text used to describe another widget
  • Tooltip – informational window which appears when the mouse hovers over another control
  • Balloon help
  • Status bar – a graphical control element which presents an information area typically found at the window's bottom
  • Progress bar – a graphical control element used to visualize the progression of an extended computer operation, such as a download, file transfer, or installation
  • Infobar – a graphical control element used by many programs to display non-critical information to a user

Container

  • Window – a graphical control element consisting of a visual area containing some of the graphical user interface elements of the program it belongs to
  • Collapsible panel – a panel that can compactly store content which is hidden or revealed by clicking the tab of the widget.
    • Drawer – side sheets or surfaces containing supplementary content that may be anchored to, pulled out from, or pushed away beyond the left or right edge of the screen.[6]
  • Accordion – a vertically stacked list of items, such as labels or thumbnails where each item can be "expanded" to reveal the associated content
  • Modal window – a graphical control element subordinate to an application's main window which creates a mode where the main window can not be used.
  • Dialog box – a small window that communicates information to the user and prompts for a response
  • Palette window – also known as "Utility window" - a graphical control element which floats on top of all regular windows and offers ready access tools, commands or information for the current application
    • Inspector window – a type of dialog window that shows a list of the current attributes of a selected object and allows these parameters to be changed on the fly
  • Frame – a type of box within which a collection of graphical control elements can be grouped as a way to show relationships visually
  • Canvas – generic drawing element for representing graphical information
  • Cover Flow – an animated, three-dimensional element for visually flipping through snapshots of documents, website bookmarks, album artwork, or photographs.
  • Bubble Flow – an animated, two-dimensional element that allows users to browse and interact with the entire tree view of a discussion thread.
  • Carousel (computing) – a graphical widget used to display visual cards in a way that's quick for users to browse, both on websites and on mobile apps

from Grokipedia
A graphical widget, also known as a graphical control element or control, is a software component in a graphical user interface (GUI) that enables users to interact with digital applications or operating systems through visual elements, such as displaying information or responding to user inputs like clicks or drags.[1] These widgets form the building blocks of GUIs, allowing direct manipulation to read data, initiate actions, or navigate systems, and are typically arranged hierarchically within windows or frames for organized user experiences.[2] Common examples include buttons for triggering events, text fields for input, scroll bars for navigation, checkboxes for selections, sliders for value adjustments, and menus for options, all designed to promote intuitive and efficient human-computer interaction.[1][3]

The concept of graphical widgets emerged in the 1970s at Xerox PARC, where the Alto computer introduced foundational elements like windows, icons, and scroll bars as part of the first fully functional GUI, influencing subsequent developments in personal computing.[4] This innovation built on earlier ideas, such as Douglas Engelbart's 1968 demonstration of the mouse and windows in the oN-Line System (NLS), which laid groundwork for widget-based interactions.[4] By the 1980s, commercial systems like Apple's Macintosh popularized widgets through standardized interfaces, making GUIs accessible beyond research labs and driving widespread adoption.[4]

In modern computing, graphical widgets are implemented via widget toolkits or libraries, which provide reusable, object-oriented components to ensure consistency, code efficiency, and cross-platform compatibility in application development.[5] Prominent examples include GTK, an open-source toolkit used for creating cross-platform GUIs in applications like GNOME, and Qt, which supports complex interfaces in desktop and mobile software.[6] These toolkits handle widget hierarchies, event processing, and rendering, reducing development effort while maintaining platform-specific looks and behaviors.[5] Widgets continue to evolve with technologies like touch interfaces and web-based GUIs, emphasizing accessibility, responsiveness, and integration with diverse input methods.[2]

Fundamentals

Definition and Terminology

A graphical widget, also referred to as a control, is an element of a graphical user interface (GUI) that facilitates user interaction with an operating system or application or displays information to the user.[1] These elements are typically rectangular in shape and operate in an event-driven manner, responding to user inputs such as clicks or hovers.[7] In human-computer interaction terminology, "widget" and "control" are synonymous terms denoting these discrete, interactive GUI components, while "component" highlights their role as modular, reusable units within object-oriented programming frameworks.[1] Widgets differ from related concepts like icons, which are often static visual symbols representing applications or files but can also serve interactive functions such as clickable shortcuts, and windows, which function as top-level containers rather than individual interactive elements.[7] This distinction underscores widgets' focus on dynamic user engagement over mere representation or containment.

The anatomy of a graphical widget includes structural features such as borders for visual separation, and in some cases, handles for manipulation like resizing, alongside behavioral states that indicate interactivity levels.[1] Common states encompass active (enabled and responsive to input), hovered (temporarily highlighted on mouse approach), and disabled (non-interactive and visually subdued).[8]

Widgets are categorized as primitive or composite based on their structure and capabilities. Primitive widgets, such as buttons, are fundamental elements that do not manage child widgets and handle direct user interactions independently.[7] In contrast, composite widgets, like dialogs, serve as containers that incorporate and coordinate multiple child widgets, enabling complex assemblies.[9]
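The primitive/composite distinction can be sketched with two illustrative classes: a primitive widget stands alone, while a composite one owns and coordinates children. The names here are hypothetical and not drawn from any particular toolkit.

```python
# Sketch of the primitive/composite widget distinction (illustrative):
# a composite is itself a widget, but additionally manages children.

class Primitive:
    """A standalone widget with no children (e.g., a button)."""
    def __init__(self, name):
        self.name = name

class Composite(Primitive):
    """A container widget (e.g., a dialog) holding child widgets."""
    def __init__(self, name):
        super().__init__(name)
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

# A dialog assembled from primitive children:
dialog = Composite("dialog")
dialog.add(Primitive("label"))
dialog.add(Primitive("ok_button"))
```

Because a composite is also a widget, composites can nest inside other composites, which is what produces the widget trees discussed later.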

Key Characteristics

Graphical widgets exhibit interactivity as a core trait, enabling dynamic user engagement through event-handling mechanisms that process inputs like mouse clicks, keyboard presses, and touch gestures. These events are captured and dispatched by the underlying system, often using an observer pattern where widgets register listeners to respond appropriately, such as updating displays or triggering actions upon a button press. For instance, in Qt, widgets receive events via virtual handlers like mousePressEvent() and keyPressEvent(), ensuring responsive behavior across input modalities.[10] Similarly, GTK widgets emit signals for events such as button-press-event and key-press-event, facilitating propagation and custom handling.[11]

Visual properties define the rendering and presentation of widgets, encompassing aspects like size, position, color schemes, and layout constraints to ensure coherent display within the interface. Size and position are typically managed relative to a parent container, with methods allowing adjustment: Qt's resize() sets dimensions while respecting minimumSize and maximumSize constraints, and move() positions the widget.[10] Color schemes and styling are applied through palettes or themes, promoting visual consistency. Layout constraints vary between fixed (non-resizable) and flexible (resizable based on content or user needs); GTK employs geometry management with get_preferred_width() and size_allocate() to enforce such constraints during rendering.[11] These properties collectively allow widgets to adapt to screen resolutions and user preferences without altering core functionality.

State management governs the behavioral and visual transitions of widgets across conditions like normal, focused, pressed, and disabled, providing feedback on user interactions and system status. In the normal state, widgets appear in their default form with full interactivity; the focused state highlights keyboard or navigation selection, often via overlays or borders; the pressed state signals active input like a click, typically with a ripple or depression effect; and transitions between states ensure smooth animations for usability. Material Design specifies these states with emphasis levels (low for disabled, at 38% opacity; high for pressed, with a ripple overlay) to maintain visual hierarchy and accessibility.[8] Qt tracks states through properties like setEnabled() and events such as changeEvent(), while GTK uses set_state_flags() for flags like sensitive or prelighted, enabling consistent state propagation.[10][11]

Portability ensures widgets maintain consistent behavior and appearance across diverse platforms by abstracting platform-specific details through wrapper or emulated layers. Widget toolkits wrap native controls in a unified API, allowing code to run on Windows, macOS, or Linux without modification, though limited to shared features for native look-and-feel. Qt achieves this via platform-agnostic rendering with native widgets where possible, supporting Embedded Linux, macOS, Windows, and X11.[10] GTK similarly abstracts via GDK for cross-environment consistency, handling variations in event systems and drawing primitives. This abstraction reduces development effort while preserving performance, as seen in toolkits like wxWidgets that map to Motif on Unix and Win32 on Windows.[11][12]

The hierarchical structure of widgets forms a tree of parent-child relationships, enabling nesting, layout composition, and efficient event propagation throughout the interface. A parent widget contains and manages children, dictating their relative positioning and resource allocation; child widgets inherit properties like focus policy from parents and are automatically disposed upon parent destruction. In Qt, parentWidget() defines this relation, with top-level widgets lacking parents to serve as windows, and methods like focusNextPrevChild() traversing the tree for input routing.[10] GTK enforces hierarchy via get_parent() and set_parent(), where size requests propagate upward and events bubble from children to ancestors through signals like hierarchy-changed. This model supports complex UIs by allowing events to cascade, such as a click on a child triggering parent-level updates.[11]
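The event bubbling described for widget trees can be sketched in a few lines: an event delivered to a child walks up the parent chain until some ancestor declares that it handles that event type. The Node class here is a hypothetical model, not a real toolkit API.

```python
# Sketch of hierarchical event propagation ("bubbling"): an event
# targeted at a child widget cascades toward the root until an
# ancestor handles it (illustrative model, not a real toolkit API).

class Node:
    def __init__(self, name, parent=None, handles=()):
        self.name = name
        self.parent = parent
        self.handles = set(handles)   # event types this widget handles

    def dispatch(self, event):
        # Walk from the target widget toward the root, stopping at
        # the first widget that handles this event type.
        node = self
        while node is not None:
            if event in node.handles:
                return node.name
            node = node.parent
        return None                   # event reached the root unhandled

window = Node("window", handles={"click"})
panel = Node("panel", parent=window)
button = Node("button", parent=panel, handles={"key"})
```

A "click" dispatched at the button bubbles past the panel and is handled by the window, whereas a "key" event is handled by the button itself.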

Historical Development

Origins in Early Computing

The origins of graphical widgets trace back to the 1960s, when early experiments in interactive computing laid the groundwork for visual user interface elements. In 1963, Ivan Sutherland developed Sketchpad during his PhD thesis at MIT's Lincoln Laboratory, utilizing the experimental TX-2 computer to create the first graphical user interface. This system enabled users to draw and manipulate geometric shapes interactively via a light pen, incorporating features like variable constraints and master-instance relationships that anticipated widget-like modularity and reusability in graphical design.[13]

Advancing these concepts, Douglas Engelbart and his team at Stanford Research Institute unveiled the oN-Line System (NLS) in 1968, showcased in the landmark "Mother of All Demos" at the Fall Joint Computer Conference. NLS introduced the mouse as a pointing device for direct screen interaction, alongside graphical elements such as windows for organizing information, selectable on-screen buttons, and hypertext links, facilitating collaborative editing and navigation in a networked environment.[14]

Xerox PARC accelerated widget development in the 1970s with the Alto computer, operational by April 1973, which pioneered a bit-mapped display and three-button mouse to support early GUI components. The Alto's interface included draggable windows, pull-down menus, and icon-based file representations, allowing users to perform operations like selecting and manipulating objects through pointing and clicking, thus establishing widgets as integral to personal computing.[15]

A pivotal milestone arrived in 1981 with the Xerox Star workstation, the first commercially available system featuring a comprehensive widget set. It employed icons as visual metaphors for office items (e.g., folders and documents), overlapping windows for content viewing, and interactive forms like property sheets for editing attributes, providing a consistent framework for user actions such as dragging and menu selection.[16]

This era's shift from command-line text interfaces to graphical ones was driven by bitmap displays, which permitted fine-grained pixel rendering for dynamic visuals, and pointing devices like the mouse, enabling intuitive spatial interaction over keyboard inputs.[17]

Evolution and Standardization

The commercialization of graphical widgets accelerated in the mid-1980s with the release of the Apple Macintosh in 1984, which popularized essential interface elements such as scrollbars, dialog boxes, buttons, and menus through its intuitive graphical user interface (GUI).[18] This system employed a desktop metaphor with icons, windows, and a one-button mouse, enabling point-and-click interactions that replaced command-line complexity and fostered widespread adoption among home, office, and educational users.[18] Apple's inclusion of a user interface toolbox further ensured a consistent look and feel across applications, standardizing features like undo, cut, copy, and paste.[18]

Microsoft Windows 1.0, launched in 1985 as a GUI extension for MS-DOS, built on this momentum by incorporating scrollbars for content navigation, dialog boxes for user prompts, and window control widgets for resizing and moving tiled windows.[4] These elements drew from earlier innovations like those at Xerox PARC but were adapted for broader PC accessibility, promoting widget standardization in business and consumer software.[4]

Early widget toolkits emerged in the 1980s to streamline GUI development. Apple's MacApp, introduced in 1985 as an object-oriented application framework, provided reusable libraries for creating UI components including windows, dialog boxes, scrollbars, and text views, integrated with the Macintosh Programmer's Workshop environment.[19] By MacApp 2.0 in 1988, it featured approximately 45 object classes supporting event handling, undoable commands, and a unified view system, reducing development time while enforcing Apple's human interface guidelines.[19] Concurrently, the OSF/Motif toolkit, developed in 1988 by the Open Software Foundation for the X Window System, layered high-level widgets atop Xlib to deliver standardized buttons, menus, and dialogs across Unix platforms.[20]

Standardization efforts intensified in the 1990s through organizations like The Open Group, which defined the Common Desktop Environment (CDE) in 1995 as a unified GUI specification based on the Motif toolkit for X Window System implementations.[21] CDE encompassed window, session, and file management alongside widgets for email, calendars, and text editing, requiring conformance to ISO C APIs and promoting interoperability among vendors like HP, IBM, and Sun.[21] This framework, formalized as X/Open CAE in 1998, established widget behaviors and styles as industry benchmarks for open systems.[21]

The rise of the web in the 1990s extended widgets to browser-based interfaces via HTML forms, introduced in HTML 2.0 (1995), which rendered interactive graphical controls like text inputs, checkboxes, radio buttons, and dropdown menus as client-side elements for data submission.[22] These form widgets shifted paradigms from desktop-centric to distributed GUIs, enabling dynamic web applications while maintaining cross-platform consistency through standardized rendering.[22]

Mobile computing further transformed widgets in the 2000s with touch-based designs. Apple's iPhone, released in 2007, pioneered multitouch gestures for direct manipulation of on-screen elements like sliders and buttons, eliminating physical input devices and emphasizing gesture-driven navigation.[23] Android, with version 1.5 (Cupcake) released in 2009, introduced customizable home screen widgets for glanceable information and touch interactions, allowing users to resize and interact with dynamic UI components via capacitive screens.[24] These innovations prioritized fluidity and responsiveness, redefining widget paradigms for portable, finger-based computing.

Widget Classification

Selection and Display Widgets

Selection and display widgets enable users to interact with and visualize collections of predefined data options in graphical user interfaces, emphasizing selection mechanisms and structured presentation to enhance usability. These components support passive choices from ordered or hierarchical sets, distinguishing them from direct input methods by focusing on predefined alternatives. They incorporate features like scrolling, searching, and visual cues to manage large datasets efficiently, promoting intuitive navigation and decision-making in applications.[25]

List boxes provide a visible, ordered list of selectable items, allowing single or multiple selections with built-in scrolling for datasets exceeding the visible area. They often integrate search capabilities, such as incremental filtering as users type, to quickly locate items in long lists. This widget is ideal for scenarios where displaying all options simultaneously aids comparison without overwhelming the interface.[26][27]

Combo boxes merge a compact drop-down list with an optional editable text field, conserving screen space while permitting selection from an ordered set of items. Users activate the list via a button or key press, with support for single selection, keyboard navigation, and features like auto-completion to streamline choices. Non-editable variants enforce strict adherence to available options, whereas editable ones allow custom entries alongside list-based picks.[28][29]

Tree views represent hierarchical data through expandable and collapsible nodes, forming a branching structure that reveals nested items on demand. Each node typically includes labels, icons, and lines connecting parent-child relationships, enabling users to traverse levels via mouse clicks or keyboard shortcuts. This design excels in displaying complex, nested information, such as directory structures, with states like expanded or collapsed providing at-a-glance overviews.[30]

Tables, also known as grid views, arrange data in a multi-column, row-based format for tabular displays, supporting selection of cells, rows, or ranges alongside sorting and filtering operations. Headers allow clickable sorting by column, while filters narrow visible data based on criteria, facilitating analysis of structured datasets. Scrolling and resizing columns ensure adaptability to varying content volumes and user preferences.[31]

In file selection dialogs, tree views depict folder hierarchies for navigation, paired with list boxes or tables to show file listings, where users select items amid scrolling and search tools for efficient browsing. Dropdown menus, realized through combo boxes, appear in configuration panels for option selection, such as choosing file types, with highlighting of the current choice offering immediate visual feedback on user actions. These implementations underscore the widgets' role in reducing cognitive load through familiar, interactive patterns.[32][33]
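The incremental filtering described for list boxes can be sketched as a pure function applied after each keystroke; this is an illustrative model of the behavior, not code from any toolkit.

```python
# Sketch of incremental list-box filtering: as the user types a query,
# only matching items remain visible (illustrative model).

def filter_items(items, query):
    """Case-insensitive prefix match, a common list-box filtering rule."""
    q = query.lower()
    return [item for item in items if item.lower().startswith(q)]

files = ["readme.md", "report.pdf", "recipe.txt", "notes.txt"]

# The visible subset narrows as each additional character is typed:
visible = filter_items(files, "re")   # matches readme, report, recipe
```

A real widget would re-run this filter on every keystroke and redraw the visible list, so the displayed set converges on the item the user is looking for.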

Input Widgets

Input widgets are graphical user interface elements designed to capture and manipulate user data through direct interaction, facilitating tasks such as entering text, selecting options, adjusting values, or specifying dates and times. These components translate user actions like typing, clicking, or dragging into programmatic inputs, often with built-in constraints to ensure data validity and usability. In modern GUI frameworks, input widgets adhere to platform standards for consistency, supporting accessibility features like keyboard navigation and screen reader compatibility.[34]

Text fields and areas provide mechanisms for alphanumeric input, with single-line text fields suited for short entries like usernames or search terms, while multi-line text areas accommodate longer content such as messages or documents. Single-line fields, exemplified by Qt's QLineEdit, support input validation through masks (e.g., restricting to numeric values) and auto-completion based on predefined lists or user history.[35] Multi-line areas, like HTML's textarea element, allow free-form editing of longer passages of text.
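The input-mask validation described above can be sketched as a predicate checked on each proposed edit; the TextField class here is a hypothetical model, though QLineEdit masks and HTML input validation hooks behave analogously.

```python
# Sketch of an input mask on a single-line text field: each keystroke
# is accepted only if the resulting text still satisfies the mask
# (illustrative model, not a real toolkit API).

class TextField:
    def __init__(self, mask=str.isdigit):
        self.text = ""
        self.mask = mask              # predicate applied per character

    def type_char(self, ch):
        candidate = self.text + ch
        if all(self.mask(c) for c in candidate):
            self.text = candidate     # accept the edit
        # otherwise the keystroke is silently rejected

field = TextField()                   # numeric-only mask by default
for ch in "12a3":
    field.type_char(ch)
# field.text == "123"  (the letter "a" was rejected by the mask)
```

Real toolkits usually also give visual feedback on a rejected keystroke (a beep or a brief highlight) rather than discarding it silently.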
