Menu (computing)

from Wikipedia

A drop-down menu of file operations in a Microsoft Windows program

In user interface design, a menu is a list of options presented to the user.

Pictorial menu for a digital camera

A user chooses an option from a menu by using an input device. Some input methods require linear navigation: the user must move a cursor or otherwise pass from one menu item to another until reaching the selection. On a computer terminal, a reverse video bar may serve as the cursor.

Touch user interfaces and menus that accept codes to select menu options without navigation are two examples of non-linear interfaces.

Some of the input devices used in menu interfaces are touchscreens, keyboards, mice, remote controls, and microphones. In a voice-activated system, such as interactive voice response, a microphone sends a recording of the user's voice to a speech recognition system, which translates it to a command.

Types of menus

Text-based menu in an application program
Text-based menu (German) with selection by cursor keys or mouse

A computer using a command line interface may present a list of relevant commands with assigned shortcuts (digits or letters) on the screen. Entering the appropriate shortcut selects a menu item. A more sophisticated approach offers navigation using the cursor keys or the mouse (even in two dimensions, with menu items appearing and disappearing much as in the menus common in GUIs). The current selection is highlighted and can be activated by pressing the enter key.
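A minimal sketch of such a text menu in Python (the function names and prompt text are illustrative, not taken from any particular program):

```python
def select_option(options, entry):
    """Map a typed shortcut (a digit entered as text) to a menu label.

    Returns the chosen label, or None if the entry is not a valid shortcut.
    """
    entry = entry.strip()
    if entry.isdigit() and 1 <= int(entry) <= len(options):
        return options[int(entry) - 1]
    return None

def run_menu(options, read=input, write=print):
    """Display numbered options until a valid shortcut is entered."""
    while True:
        for number, label in enumerate(options, start=1):
            write(f"{number}. {label}")
        chosen = select_option(options, read("Select: "))
        if chosen is not None:
            return chosen
        write("Invalid choice, try again.")
```

Passing `read` and `write` as parameters keeps the loop testable; a cursor-key variant would instead track a highlighted index and redraw the list on each keypress.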

A computer using a graphical user interface presents menus with a combination of text and symbols to represent choices. By clicking on one of the symbols or text, the operator is selecting the instruction that the symbol represents. A context menu is a menu in which the choices presented to the operator are automatically modified according to the current context in which the operator is working.

A common use of menus is to provide convenient access to various operations such as saving or opening a file, quitting a program, or manipulating data. Most widget toolkits provide some form of pull-down or pop-up menu. Pull-down menus are the type commonly used in menu bars (usually near the top of a window or screen), which are most often used for performing actions, whereas pop-up (or "fly-out") menus are more likely to be used for setting a value, and might appear anywhere in a window.

According to traditional human interface guidelines, menu names were always supposed to be verbs, such as "file", "edit" and so on.[1] This has been largely ignored in subsequent user interface developments. A single-word verb however is sometimes unclear, and so as to allow for multiple word menu names, the idea of a vertical menu was invented, as seen in NeXTSTEP.

Menus are now also seen in consumer electronics, starting with TV sets and VCRs that gained on-screen displays in the early 1990s, and extending into computer monitors and DVD players. Menus allow the control of settings like tint, brightness, contrast, bass and treble, and other functions such as channel memory and closed captioning. Other electronics with text-only displays can also have menus, anything from business telephone systems with digital telephones, to weather radios that can be set to respond only to specific weather warnings in a specific area. Other more recent electronics in the 2000s also have menus, such as digital media players.

Submenus
Menu and expanded submenu

Menus are sometimes hierarchically organized, allowing navigation through different levels of the menu structure. Selecting a menu entry with an arrow will expand it, showing a second menu (the submenu) with options related to the selected entry.

Usability of submenus has been criticized as difficult, because of the narrow height that must be crossed by the pointer. The steering law predicts that this movement will be slow, and any error in touching the boundaries of the parent menu entry will hide the submenu. Techniques proposed to alleviate these errors include keeping the submenu open while the pointer moves diagonally toward it, and using mega menus designed to enhance scannability and categorization of their contents.[2][3] A negative user experience with submenus is referred to as "menu diving".[4]
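The steering-law prediction referenced above can be stated as a path integral over the corridor the pointer must cross; for the straight, constant-width row of a parent menu entry it reduces to a simple ratio (a and b are empirically fitted constants):

```latex
T = a + b \int_{C} \frac{ds}{W(s)}
\qquad\text{which, for a straight tunnel of length } A
\text{ and constant width } W, \text{ becomes}\qquad
T = a + b\,\frac{A}{W}
```

A long parent entry (large A) with the narrow height of a single menu row (small W) therefore predicts slow, error-prone traversal, which is what widened diagonal activation zones and mega menus try to avoid.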

Usage of attached ellipses


In menu entries and buttons, an appended ellipsis ("…") indicates that selecting the item opens a further dialog in which the user can or must make a choice.[5] If the ellipsis is missing, the function is executed immediately upon selection.

  • "Save": the file will be overwritten without further input.
  • "Save As…": in the following dialog, the user can, for example, select a different location, file name or file format.

Touchscreens

Top-down menu on a printer

Displays with touchscreen functionality, e.g. modern cameras and printers, also have menus: these are not drop-down menus but buttons.

from Grokipedia
In computing, a menu is a user interface element—often graphical but also text-based—that presents a selectable list of options or commands to users, enabling efficient navigation, data access, and task execution within software applications.[1]

These elements originated in early computing systems, with on-screen menus using key combinations appearing in defense applications during the 1960s.[2] Significant advancements occurred in the 1970s at Xerox PARC, where developers introduced pop-up menus that appeared at the cursor position via mouse presses in systems like Smalltalk, alongside early on-demand menu implementations in programs such as William Newman's Markup.[2] The 1981 Xerox Star workstation formalized the menu bar as a persistent row of labeled categories, typically positioned at the top of the screen, which influenced subsequent designs.[2] By 1983–1984, Apple's Lisa and Macintosh popularized pull-down menus from a single top-level menu bar, a design patented by William D. Atkinson that became a standard in personal computing GUIs.[2][3]

Menus vary by type and context to optimize user interaction and screen real estate.
Common types include drop-down menus, which expand on demand via clicks or hovers to reveal hierarchical options; context menus, triggered by right-clicking on objects to provide relevant, location-specific commands; and submenus or cascading menus, which nest additional lists indicated by arrows for deeper navigation.[4] Menu bars serve as primary organizers, listing top-level categories like File or Edit in applications, while toolbar menus integrate menu functionality into buttons for quicker access.[4] In web development, the HTML <menu> element functions as a semantic toolbar for commands, distinct from navigational lists like unordered lists.[5]

Design guidelines emphasize limiting categories to around 10 and items per level to 25, using access keys for keyboard navigation, and disabling rather than hiding unavailable options to maintain predictability.[4]

Beyond traditional desktops, menus have evolved for mobile and web contexts, such as hamburger icons representing collapsed navigation menus in responsive designs, ensuring adaptability across devices.[6] Early menu-driven interfaces, predating widespread GUIs, relied on text-based lists for command-line systems, highlighting menus' role in simplifying complex interactions from the outset of interactive computing.[7]

Definition and Fundamentals

Definition

In computing, a menu is a selectable list of options or commands presented to the user within a user interface, primarily to facilitate navigation, selection, or execution of actions in software applications or operating systems.[1] This structure allows users to interact with the system by choosing from predefined choices rather than entering commands manually, making it a fundamental element in both graphical user interfaces (GUIs) and command-line interfaces (CLIs).

Menus exhibit key characteristics that define their functionality: they organize options either hierarchically (e.g., nested sub-options) or linearly (e.g., flat lists), and they can be static—featuring fixed, unchanging items—or dynamic, adapting based on context, user behavior, or system state, such as frequency of use or current application mode. Examples include numbered lists in CLIs for sequential selection or drop-down menus in GUIs that expand upon user interaction to reveal related choices.[1] These attributes enable menus to serve as versatile widgets for command exploration and invocation across diverse interactive environments.

Unlike other user interface elements such as buttons, which trigger a single predefined action, or icons, which provide visual shortcuts for quick recognition and selection, menus group multiple related options into a cohesive, bounded set to promote structured decision-making. This grouping distinguishes menus by emphasizing choice presentation over isolated inputs, thereby supporting broader task completion without overwhelming the user with disparate controls.[1]

The conceptual roots of menus lie in their role as a mechanism to reduce cognitive load in human-computer interaction, shifting the burden from recall of commands to recognition of visually or textually presented alternatives, which aligns with principles of guided user-system dialogue.
By presenting options explicitly, menus minimize errors and learning curves, particularly for novice users, while allowing efficient access for experts through familiar structures.

Historical Development

The concept of menus in computing originated in the 1960s with text-based systems on mainframes and in defense applications, where command selectors presented users with numbered options or key combinations for selecting operations, serving as precursors to interactive interfaces on terminals.[2] These early implementations emphasized efficiency in batch processing environments, allowing non-expert users to navigate limited choices without memorizing commands. By the early 1970s, such menu-driven selectors became more prevalent in time-sharing systems, facilitating user interaction on character-based displays.

A pivotal advancement occurred in 1968 when Douglas Engelbart's "Mother of All Demos" showcased the oN-Line System (NLS), introducing the mouse and on-screen pointer that laid foundational influence for later mouse-driven menu selection, though NLS itself relied more on keysets and direct manipulation than hierarchical menus.[8] This demonstration inspired subsequent innovations in pointing-based interfaces. In 1973, the Xerox Alto at PARC introduced the WIMP paradigm—windows, icons, menus, and pointer—including early pop-up menus that appeared at the cursor position for command selection, marking the shift toward graphical menu systems.[9] Building on this, Xerox PARC's 1974 Smalltalk environment refined pop-up menus with hierarchical structures, demonstrating them publicly in 1975 as a core element of dynamic user interfaces.[8]

The 1980s saw menus popularized in commercial personal computers.
The 1983 Apple Lisa featured the first pull-down menu bar at the top of the screen, complete with checkmarks, keyboard shortcuts, and disabled options, revolutionizing application navigation.[10] This design carried over to the 1984 Apple Macintosh, which adopted a menu-driven interface as standard, making graphical menus accessible to mainstream users.[10] Microsoft followed with Windows 1.0 in 1985, incorporating cascading menus below application title bars, which allowed nested sub-options and became a staple in productivity software.[8]

In the 1990s, menu designs expanded beyond desktops. NeXTSTEP, released in 1988 and evolving through the decade, introduced vertical menus alongside its 3D-beveled aesthetics, providing space-efficient alternatives to horizontal bars in object-oriented environments.[8] Concurrently, menus were integrated into consumer electronics; on-screen displays (OSD) for VCR programming emerged in the 1980s with Akai models, becoming widespread by the mid-1990s in TVs and VCRs for channel selection and timer settings. These adaptations brought menu concepts to non-computer devices, emphasizing simplicity for household use.

From the 2000s onward, menus evolved with web and mobile platforms, emphasizing context-sensitive designs that adapt to user actions, such as right-click options in browsers or gesture-triggered lists in apps.[11] Open-source desktops like GNOME (from 1997) and KDE (from 1996) influenced this shift; GNOME's minimalist panels integrated searchable menus, while KDE's customizable Kickoff menu supported hierarchical and application-launching innovations, fostering community-driven refinements in Linux environments.[12] These developments prioritized adaptability across devices, from desktops to touchscreens.

Types of Menus

Command-Line Menus

Command-line menus are text-based interfaces embedded within command-line environments, presenting users with a structured list of selectable options to execute commands without requiring full syntax recall. These menus typically employ numbered or lettered lists for sequential or hierarchical choices, often linear in presentation to accommodate terminal constraints, with shortcuts like single digits or letters for quick selection. In early systems, such structures facilitated guided interaction on limited hardware, as seen in mainframe terminals where full-screen forms with menu options minimized errors through predefined selections.[13][14]

Navigation in command-line menus relies exclusively on keyboard inputs, such as entering a number or letter to select an option, pressing Enter to confirm, or using arrow keys for scrolling in enhanced implementations; direct typing of commands is sometimes allowed as an alternative. For instance, the Unix fdisk utility launches an interactive mode displaying a menu of single-letter commands—such as 'p' to print the partition table, 'n' to add a new partition (followed by prompts for details), 'd' to delete a partition, and 'q' to quit—enabling step-by-step disk management via typed responses. Similarly, MS-DOS fdisk presents a numbered menu, including option 1 to create a DOS partition, 3 to delete a partition, and 4 to display information, with users inputting digits to proceed through submenus for tasks like setting active partitions.[15][16]

These menus offer advantages particularly for expert users, providing efficient, concise interaction that outperforms free-form commands in speed while requiring minimal system resources, making them ideal for resource-constrained environments like mainframes in the 1960s–1970s and early PCs in the 1980s.
Historically, they powered utilities on Unix-like systems and DOS, as well as menu-driven interfaces in text adventures created with tools like The Quill, where players selected actions from lists to advance narratives without parsing complex inputs. Integration with scripting enhances dynamism; for example, Bash's select construct generates runtime menus from arrays, allowing scripts to present context-dependent options like file choices derived from directory listings.[17][18][19][20]

Despite these strengths, command-line menus pose limitations for novice users, lacking inherent visual cues or intuitive discovery, which can hinder learnability compared to graphical alternatives, and they offer no built-in hierarchy beyond text nesting unless augmented by tools like the dialog utility for pseudo-graphical elements in terminals. Their reliance on memorization or on-screen prompts without spatial organization further exacerbates accessibility issues in complex scenarios.[13][18]
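The runtime-generated menu mentioned above can be sketched in Python (Bash's select achieves the same effect with a single built-in; the function name here is illustrative):

```python
import os

def directory_menu(path="."):
    """Build a numbered menu from a directory listing at run time,
    similar in spirit to Bash's `select name in *` construct."""
    entries = sorted(os.listdir(path))
    return [f"{number}) {name}" for number, name in enumerate(entries, start=1)]
```

Because the list is computed when the menu is shown, the options always reflect the current directory contents rather than a fixed set of choices.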

Graphical Menus

Graphical menus represent a fundamental element of graphical user interfaces (GUIs), providing users with visual, interactive lists of options that facilitate navigation and command selection through pointing devices like mice or touch inputs. Unlike text-based systems, these menus leverage spatial arrangement, icons, and animations to enhance accessibility and efficiency, originating from early innovations in windowing systems that aimed to mimic physical desk interfaces. They are integral to WIMP (Windows, Icons, Menus, Pointer) paradigms, where menus serve as organized gateways to application functions, allowing users to scan and select options visually rather than recalling commands from memory.

Key subtypes of graphical menus include pull-down menus, which appear as cascading lists triggered from a horizontal menu bar at the top of an application window; pop-up menus, which emerge contextually near the pointer position, often via right-click actions; pie menus, radial arrangements that fan out from a central point for rapid selection; and ribbon interfaces, which evolve traditional menus into tabbed, contextual toolbars combining icons and labels for workflow-oriented tasks.

Pull-down menus, popularized in the Xerox Alto and later Apple Macintosh systems, enable hierarchical organization by dropping down from top-level categories like "File" or "Edit." Pop-up menus, also known as context menus, adapt to the selected object, such as offering copy-paste options in file explorers. Pie menus, introduced in research on efficient pointing interactions, reduce selection time by minimizing cursor travel through angular positioning. Ribbon menus, as implemented in Microsoft Office since 2007, integrate menu functions into customizable tabs to streamline complex operations like document formatting.
Design elements in graphical menus emphasize clarity and adaptability, incorporating text labels—typically action-oriented verbs or nouns like "Save" or "View"—alongside icons to convey meaning at a glance, while dynamic features such as graying out unavailable options or highlighting hovered items provide real-time feedback. These elements ensure menus remain unobtrusive when inactive, expanding only on demand to preserve screen real estate. For instance, in applications like Microsoft Word, menu bars display persistent top-level options with sub-items revealed on click, blending textual and symbolic representations for broad usability. Context menus in operating system file explorers, such as Windows Explorer, similarly adjust contents based on file types, disabling irrelevant actions like "Rename" for read-only files.

The evolution of graphical menus has progressed from static, text-heavy lists in 1970s systems like the Xerox Star to modern implementations featuring smooth animated transitions and adaptive layouts that respond to user context or device orientation. This shift, driven by advancements in display technology and interaction design, underscores their role in WIMP environments by enabling intuitive hierarchy without cluttering the interface. Animated expansions, for example, guide the eye during menu invocation, improving perceived responsiveness in software like Adobe Photoshop.

Advantages of graphical menus lie in their support for visual scanning, which aids discoverability by allowing users to browse options spatially rather than linearly, and their capacity to embed hierarchies compactly, thus accommodating complex applications without overwhelming the user. Research highlights how these menus reduce cognitive load by leveraging familiarity from real-world analogs, such as restaurant menus, while maintaining efficiency in pointer-based selection.
Submenus extend the functionality of primary menus by organizing options into nested levels, forming hierarchical structures that allow users to access related commands through progressive selection. In graphical user interfaces (GUIs), cascading submenus typically appear as pull-down or pop-up panels triggered by hovering over or clicking a parent menu item, such as in the common "File > Open > Recent Files" sequence found in desktop applications. This mechanic relies on tree-like hierarchies where each node represents a menu level, enabling efficient grouping of commands while maintaining a compact primary menu.[21][22]

Design principles for submenus emphasize balancing hierarchy depth and breadth to optimize usability; research indicates optimal performance at around two levels of depth, with deeper hierarchies increasing selection times and error rates due to the cognitive demands of traversing multiple layers.[23][24] Designers mitigate this by limiting nesting depth and incorporating visual aids like separators to divide menu sections logically and checkmarks to indicate selected states, such as toggled options in preference submenus. These elements help users parse hierarchies quickly without overwhelming the interface.

Representative examples include nested folder structures in operating system file menus, where users navigate hierarchies like "Documents > Projects > 2025 Reports" to access files, mirroring the submenu logic in applications like Microsoft Windows Explorer. In e-commerce websites, mega-menus expand this concept by displaying thumbnail previews and categorized sub-options in a single large panel upon hover, reducing clicks for browsing product lines as seen in platforms like Amazon.[25][26]

Deep hierarchies pose challenges like information overload, where users face cognitive strain from scanning numerous nested options, leading to slower task completion and higher frustration.
Solutions include breadcrumb trails, which provide a persistent path summary (e.g., Home > Category > Subcategory) to facilitate backtracking without re-traversing menus, and fisheye views that magnify focal items while compressing peripheral ones in hierarchical displays.[27][28]

From a technical standpoint, event handling in submenus often incorporates short delays, such as 300-500 milliseconds, before revealing or closing panels to accommodate imprecise cursor movements and prevent accidental dismissals during navigation. This is implemented via mouse event listeners that monitor hover entry and exit, with adaptive activation areas expanding hit zones around submenu triggers to improve pointing accuracy in cascading designs.[29][30][31]
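The delay-based event handling can be sketched as a small state machine in Python (class and method names are illustrative; a real toolkit would drive `tick` from its event loop):

```python
import time

class SubmenuTrigger:
    """Open a submenu only after the pointer has dwelt on the parent
    item for `delay` seconds, filtering out accidental pass-overs."""

    def __init__(self, delay=0.3, clock=time.monotonic):
        self.delay = delay        # 0.3 s sits in the 300-500 ms range cited above
        self.clock = clock        # injectable clock, so the logic is testable
        self.entered_at = None
        self.is_open = False

    def pointer_enter(self):
        """Pointer moved onto the parent item: start the dwell timer."""
        self.entered_at = self.clock()

    def pointer_leave(self):
        """Pointer left the item: cancel the timer and close the submenu."""
        self.entered_at = None
        self.is_open = False

    def tick(self):
        """Poll from the event loop; opens the submenu once dwell time
        exceeds the configured delay. Returns the current open state."""
        if self.entered_at is not None and not self.is_open:
            if self.clock() - self.entered_at >= self.delay:
                self.is_open = True
        return self.is_open
```

Injecting the clock keeps the dwell logic deterministic under test; a symmetrical delay on pointer_leave would model the closing grace period some toolkits add.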

Indicators and Affordances

In graphical user interfaces, ellipses ("…") serve as a key visual indicator appended to menu items that require additional user input before execution, such as opening a dialog box for further choices. For instance, "Save As…" signals the need for file name and location specification, in contrast to immediate actions like "Save," which perform without further interaction. This convention enhances predictability by distinguishing deferred actions from instant ones.[32]

Other common indicators include right-pointing arrows (►) to denote submenus, checkmarks (✓) or radio buttons (●) for toggleable states indicating selection or activation, and grayed-out text or dimmed icons for disabled options that are unavailable under current conditions. These cues provide affordances—perceivable action possibilities—that guide users toward interactive elements, reducing cognitive load by implying hierarchy, status, and constraints without explicit instructions.[33]

Design guidelines emphasize platform consistency to reinforce these affordances; Apple's Human Interface Guidelines recommend ellipses for commands needing separate interfaces and red tinting for destructive menu items, while Microsoft's Windows UX guidelines advocate ellipses for dialog-launching commands and arrows for cascading menus, with disabled states shown via graying to prevent erroneous selections. Such standards ensure menus imply interactivity intuitively across ecosystems.

Historically, ellipses were standardized in the 1980s with early Macintosh Human Interface Guidelines, evolving from Xerox PARC influences to incorporate icons in modern designs for broader visual signaling. Examples include "Preferences…" in applications like macOS Finder, which opens a settings dialog, and warning icons (e.g., caution triangles) paired with items like "Delete Permanently" to alert users to irreversible actions.[32][4][34]
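These conventions amount to a few boolean attributes per menu item; a minimal Python sketch (the field names and text rendering are illustrative, not from any real toolkit):

```python
from dataclasses import dataclass

@dataclass
class MenuItem:
    label: str
    opens_dialog: bool = False  # rendered with a trailing ellipsis
    has_submenu: bool = False   # rendered with a trailing arrow
    checked: bool = False       # rendered with a leading checkmark
    enabled: bool = True        # disabled items are conventionally grayed out

def render(item):
    """Render one item as text, applying the indicator conventions."""
    prefix = "\u2713 " if item.checked else "  "       # checkmark
    if item.opens_dialog:
        suffix = "\u2026"                              # ellipsis: a dialog follows
    elif item.has_submenu:
        suffix = " \u25ba"                             # arrow: a submenu follows
    else:
        suffix = ""
    text = f"{prefix}{item.label}{suffix}"
    # Parentheses stand in for graying out in this text-only sketch.
    return text if item.enabled else f"({text.strip()})"
```

So `MenuItem("Save As", opens_dialog=True)` renders with a trailing ellipsis while a disabled item comes back parenthesized, mirroring the cues described above.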

Interaction Methods

Users interact with menus through keyboard navigation, which enables efficient selection without relying on pointing devices. Arrow keys allow sequential traversal of menu items, typically moving up, down, left, or right to highlight options in linear or hierarchical structures.[35] Hotkeys, such as Alt+F to open the File menu in many graphical user interfaces, provide direct access to top-level menus by combining modifier keys with letter keys.[36] Mnemonics, indicated by underlined letters in menu items (e.g., underlining "S" in Save), allow users to jump to specific options by pressing the corresponding key after opening the menu, enhancing speed for familiar tasks.[32]

Mouse and pointer-based interactions dominate graphical menus, where users click to open a menu and select items, often with submenus appearing adjacent to the parent. Hovering the pointer over a menu item can trigger submenu display without clicking, allowing fluid exploration in pull-down or cascading designs.[35] In pie menus, users initiate interaction by clicking to center the radial menu at the cursor position, then drag the pointer toward the desired slice for selection, which supports marking menu variants where gestures are recorded for repeated use.[37]

Other input devices adapt menu interactions to specialized contexts, such as joysticks in gaming environments, where directional tilting navigates menu items in a grid or list, often combined with button presses for confirmation.[38] Stylus input on tablets enables precise tapping or dragging on menus, leveraging pressure sensitivity for additional control, as in radial menus activated by stylus proximity.[39] Voice-based selection supports linear traversal in audio menus, where users speak item numbers or keywords sequentially, contrasting non-linear approaches like direct voice indexing that bypass hierarchy for immediate command invocation.[40]

Specific techniques enhance menu usability across devices, including reverse video cursors in terminal-based menus, where an inverted highlight bar indicates the current selectable item for low-resolution displays.[41] Diagonal movement allowances in submenu designs permit users to steer the pointer directly toward child options without strict linear paths, reducing errors in cascading layouts by extending activation zones.[42]

Adaptations cater to user expertise and prevent mishaps, such as accelerators—keyboard shortcuts displayed next to menu items—that enable power users to bypass navigation entirely for frequent commands.[43] Timeout delays in hover interactions, typically 300–500 milliseconds, defer submenu activation to avoid triggering from accidental pointer overshoots, maintaining control during precise movements.[44]

Usability Principles

Usability principles in menu design emphasize ergonomic and cognitive efficiency to minimize user effort and error rates during navigation. Fitts' law, which models the time required to acquire a target as a function of its distance and width, highlights the advantages of edge-placed menus, where targets at screen boundaries have effectively infinite width, reducing movement time and improving pointing accuracy.[45] Similarly, the steering law extends this by predicting the time to navigate constrained paths, such as submenus, where movement time increases linearly with path length and inversely with path width, making deep or narrow submenu tunnels slower and more error-prone for target acquisition.[46]

Common usability issues in menus include "menu diving" errors, where users accidentally slip off narrow submenu targets during hierarchical navigation, leading to selection failures and frustration, particularly in cascading dropdowns that demand precise cursor control.[47] Narrow targets exacerbate this by slowing navigation and increasing cognitive load, as users must adjust for precision. Solutions involve enlarging hit areas to broaden effective target widths, thereby reducing acquisition time per Fitts' law, or adopting mega-menus that display multiple levels simultaneously in a single, expansive overlay, minimizing path constraints and diving risks while supporting visual scanning.[48]

Accessibility principles ensure menus support diverse users, including those relying on assistive technologies.
Screen readers require ARIA roles like menu, menuitem, and menuitemcheckbox to convey structure and enable navigation via arrow keys, while keyboard-only support must allow full operability using Tab, Enter, Escape, and arrows without timing dependencies, aligning with WCAG 2.2 Success Criterion 2.1.1 (Keyboard).[49][50] High-contrast indicators, such as underlines or focus outlines meeting the WCAG 1.4.11 (Non-text Contrast) ratio of at least 3:1, further aid visibility for low-vision users; WCAG 2.2 adds Success Criteria 2.4.11 (Focus Not Obscured (Minimum)), 2.4.12 (Focus Not Obscured (Enhanced)), and 2.4.13 (Focus Appearance) to ensure keyboard focus indicators remain visible and distinguishable during menu navigation.[51][52]

To evaluate menu usability, designers apply Hick's law, which states that choice reaction time increases logarithmically with the number of equally probable options, implying shorter menus reduce decision time and cognitive overload in item selection. A/B testing complements this by comparing menu variants—such as depth or layout changes—against metrics like task completion time and error rates, revealing performance differences in real-user contexts.[53][54]

Best practices include limiting menu depth to 2-3 levels to avoid excessive steering demands and maintain findability, grouping related items logically to leverage Gestalt principles for faster scanning, and incorporating search functionality within large menus to bypass hierarchical browsing, thus accelerating access for broad option sets.[55][56]
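Both laws are easy to apply numerically; the sketch below uses the Shannon formulation of Fitts' law (the constants a and b are device- and user-dependent and must be fitted empirically, so any specific values are illustrative):

```python
import math

def fitts_time(a, b, distance, width):
    """Shannon form of Fitts' law: MT = a + b * log2(D/W + 1)."""
    return a + b * math.log2(distance / width + 1)

def hick_time(b, n_choices):
    """Hick's law for n equally probable choices: T = b * log2(n + 1)."""
    return b * math.log2(n_choices + 1)
```

Widening a target (or placing it at a screen edge, which makes its effective width very large) lowers fitts_time, and shortening the option list lowers hick_time, which is the quantitative case for big hit areas and short, shallow menus.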

Applications in Interfaces

Desktop and Traditional GUIs

In desktop and traditional graphical user interfaces (GUIs), menus serve as primary mechanisms for accessing commands and functions through pointer-based interactions, typically using a mouse or keyboard. These environments, prevalent in operating systems like Windows, macOS, and Linux distributions, rely on menus to organize complex software functionalities in a hierarchical, discoverable manner, adhering to established human-computer interaction (HCI) conventions.

Menu bars represent a foundational element in desktop GUIs, appearing as horizontal strips at the top of application windows or the screen. In Microsoft Windows, the menu bar often includes standard categories such as File, Edit, View, and Help, providing access to common operations like saving documents or adjusting display settings; for instance, the taskbar's system tray integrates similar menu structures for quick actions. On macOS, the Apple menu, located in the upper-left corner of the screen, functions as a global menu bar entry point for system-wide controls, such as shutting down the computer or accessing preferences, while application-specific menu bars follow suit with verbs like File and Edit. These designs promote consistency across applications, reducing cognitive load for users familiar with the paradigm.

Context menus, also known as right-click menus, enhance desktop navigation by offering context-sensitive options based on the selected item or location. In Windows Explorer, right-clicking a file or folder triggers a popup menu with actions like Copy, Delete, or Properties, adapting dynamically to the file type—for example, displaying "Open with" for executables. Similarly, in macOS Finder, context menus provide equivalent functionalities, such as Get Info or Move to Trash, ensuring relevance to the user's selection and supporting efficient file management.
This adaptive behavior aligns with graphical menu subtypes such as popup menus, allowing users to perform targeted operations without navigating broader hierarchies.

Linux desktop environments such as KDE Plasma and GNOME integrate system-wide menus for a cohesive experience across applications. KDE's application launcher, accessible from a panel, organizes programs into categories like Graphics or Office, with customizable submenus for quick access. GNOME employs a top-level Activities overview that incorporates menu-like structures for application switching and search, emphasizing keyboard-driven usability in traditional setups.

An evolution of these paradigms is the ribbon interface, introduced in Microsoft Office 2007, in which tabs replace traditional menu bars with grouped commands (for example, the Home tab for formatting tools), blending menu functionality with visual icons to streamline workflows in productivity software.

Practical examples illustrate menu applications in desktop software. Web browsers such as Google Chrome expose bookmark management options, such as adding or organizing folders, through menu bars or hamburger icons, facilitating user customization of navigation. In integrated development environments (IDEs) such as Visual Studio, menus are highly customizable: users can rearrange items in the File or Tools menus or add commands via the Extensions menu, supporting tailored workflows for coding tasks.

Platform differences affect menu presentation and behavior. Windows favors cascading submenus that unfold sequentially from per-window menu bars, allowing deep command organization at the cost of additional navigation steps, whereas macOS consolidates application menus into a single global menu bar at the top of the screen.
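The dynamic adaptation of context menus described above can be sketched as a lookup that merges type-specific actions with a common base set. All names and actions below are illustrative, not any platform's actual API:

```python
import os

# Actions offered for every selection, regardless of type.
BASE_ACTIONS = ["Copy", "Delete", "Properties"]

# Extra actions keyed by file extension (illustrative examples).
TYPE_ACTIONS = {
    ".exe": ["Run as administrator"],
    ".txt": ["Open with...", "Edit"],
    ".zip": ["Extract all"],
}

def context_menu(filename: str) -> list[str]:
    """Build the option list for a right-click on `filename`:
    type-specific actions first, then the common base actions."""
    _, ext = os.path.splitext(filename.lower())
    return TYPE_ACTIONS.get(ext, []) + BASE_ACTIONS

# context_menu("report.txt")
# -> ["Open with...", "Edit", "Copy", "Delete", "Properties"]
```

Unrecognized types simply fall back to the base actions, mirroring how real file managers always show a core set of operations.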

Touch and mobile interfaces

In touch and mobile interfaces, menus are adapted to direct finger interaction and constrained screen sizes, often favoring bottom sheets and radial menus over traditional drop-downs to minimize occlusion and support intuitive gestures. Bottom sheets emerge from the bottom of the screen to display secondary options, such as sharing actions or settings, letting users keep the main content visible while interacting.[57] Radial menus, by contrast, arrange choices in a circle around a touch point, enabling rapid selection through swipe gestures that align with natural thumb movements on handheld devices.[58] These designs draw on Fitts' law: positioning all options at equal distances from the activation center accelerates touch-based choices compared with linear lists.[58]

Gesture triggers such as a long press activate contextual menus, revealing relevant actions, such as edit or delete options, directly at the point of interest without navigating away from the current view.[59] In iOS, action sheets exemplify this by sliding up from the bottom to present task-specific choices or destructive confirmations, remaining accessible even on small screens.[60] Android's navigation drawers slide in from the edge to expose app-wide destinations, triggered by taps or swipes on a menu icon, which helps preserve display space in portrait mode.[61] Camera applications frequently employ swipe menus for mode selection, where horizontal gestures toggle between photo, video, or portrait capture, as in the native iOS and Android camera interfaces, enabling seamless one-handed operation.[62]

A key challenge in these interfaces is the "fat-finger" error, where imprecise touches cause unintended selections due to overlapping or undersized targets.[63] Solutions include enlarging touch targets to a minimum of 44×44 points on iOS, providing ample area for finger pads while adhering to ergonomic standards.[64] Similarly, guidelines recommend targets of at least 1 cm × 1 cm to support accurate tapping, especially during motion or one-handed use.[63] Haptic feedback complements these measures by delivering subtle vibrations on selection, simulating physical button presses to confirm actions and reduce perceived errors in menu interactions.[65]

Mobile menu design has evolved significantly since the iPhone's 2007 introduction, which popularized bottom tab bars for visible navigation; these gave way to hamburger icons, hidden side menus, for denser layouts in early-2010s apps. By the mid-2010s, designs shifted toward full-screen overlays that expand to cover the display for immersive access to options, often triggered by edge swipes in modern apps such as social media clients. On larger tablets, hybrid approaches combine menus with multitouch and keyboard input: virtual or external keyboards allow text-based menu entry for precise navigation, while multitouch gestures enhance radial marking menus for tool selection in creative apps.[66] This multitouch integration allows simultaneous panning and menu invocation, making better use of the larger canvas for productivity tasks.[66]
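The Fitts'-law advantage of radial menus, every target sitting at the same distance from the activation point, can be sketched as follows. The radius, target width, and the Shannon formulation of the index of difficulty are illustrative choices:

```python
import math

def radial_positions(n: int, radius: float = 80.0) -> list[tuple[float, float]]:
    """Place n menu items at equal angles on a circle of the given
    radius (in pixels) around the touch point, so every target lies
    the same distance from the activation center."""
    return [
        (radius * math.cos(2 * math.pi * i / n),
         radius * math.sin(2 * math.pi * i / n))
        for i in range(n)
    ]

def fitts_index_of_difficulty(distance: float, width: float) -> float:
    """Shannon form of Fitts' law: ID = log2(distance / width + 1).
    Lower ID predicts faster, more accurate pointing."""
    return math.log2(distance / width + 1)
```

Because every radial item shares one distance, each shares one index of difficulty, whereas in a linear list the distance, and hence the difficulty, grows with the item's position.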

Emerging interfaces

Voice menus have evolved from traditional interactive voice response (IVR) systems, which use hierarchical audio prompts navigated via keypad or speech input, to more fluid speech-driven interfaces in smart assistants.[67][68] In IVR, users traverse tree-like structures by responding to numbered options or keywords, a precursor to modern voice user interfaces (VUIs).[69] Amazon Alexa's VUI, for example, supports conversational hierarchies in skills where users can say "tell me more" to expand submenus, using contextual speech recognition to maintain dialogue flow and reduce repetition.[70][71] This shift from rigid menu trees to natural language processing improves efficiency in non-visual environments such as telephony and home automation.[72]

Gesture-based menus in augmented reality (AR) and virtual reality (VR) use hand tracking and air gestures for spatial navigation of information hierarchies, bypassing physical screens and controllers. On Meta Quest VR headsets, hand tracking supports gestures such as pinching to select menu items and dragging to scroll through lists, allowing intuitive interaction in immersive 3D spaces.[73] Microsoft's HoloLens 2 employs hand rays for pointing at holographic elements, combined with air-tap gestures to activate submenus; the open-hand "bloom" gesture of the original HoloLens, which opened the main menu, is replaced on HoloLens 2 by tapping a Start icon shown on the user's wrist.[74] A review of AR gesture techniques identifies common methods such as pointing for item selection and multi-finger pinching for hierarchy expansion, which improve precision in object manipulation and menu traversal within mixed-reality environments.[75]

AI-driven predictive menus adapt hierarchies dynamically to the user's context, using machine learning to anticipate selections and streamline access. These systems analyze factors such as location, time, and past behavior to reorder or suggest options proactively, as in Google Gemini's context-aware interfaces that generate personalized navigation paths; for instance, a November 2025 integration with Google Maps enables conversational, context-aware queries during navigation.[76][77] In voice assistants, AI enables adaptive UIs that predict needs, such as surfacing relevant submenus based on the ongoing conversation, reducing cognitive load through natural language processing and behavioral patterns.[78] More broadly, such menus can evolve in real time, hiding complexity from novices while revealing advanced options to experienced users.[79]

Wearable devices exemplify compact emerging menus, often using radial or gesture-optimized layouts to fit constrained screens. On the Apple Watch, circular or radial menu designs suit the wrist-worn form factor, using Digital Crown rotation or taps to navigate hierarchical complications and apps efficiently.[80] In automotive infotainment, voice commands enable hands-free menu traversal, letting drivers query sub-options for navigation, media, or settings through integrated assistants such as Google Assistant or Alexa.[81] These systems process natural speech to drill into hierarchies, for example "play jazz playlist" to select from music submenus, prioritizing safety by minimizing visual attention.[82]

Brain-computer interfaces (BCIs) represent a frontier for thought-selected menus, decoding neural signals to enable direct mental control of digital hierarchies. Neuralink's implantable BCI translates brain activity into device commands, allowing users with motor disabilities to operate cursors and select options in lists without physical movement.[83] Synchron's endovascular BCI, inserted through a blood vessel, has enabled a participant to navigate smart home menus mentally, such as choosing TV channels or toggling lights from a thought-based grid; in August 2025, a participant demonstrated full thought control of an iPad, including home screen navigation and app launching.[84][85] These technologies build on clinical trials aimed at restoring autonomy, in which imagined actions map to menu selections in real time.[86]

Developing these interfaces raises ethical considerations around accessibility, particularly ensuring equitable benefits without introducing bias or privacy risks. AI-adaptive menus must mitigate algorithmic discrimination through diverse training data, while BCIs require safeguards for informed consent and data security to protect vulnerable users.[87] Accessibility guidelines emphasize keeping users in control of personalized systems, avoiding over-reliance on predictive features that could marginalize people with differing cognitive or sensory needs.[88] Overall, ethical design prioritizes inclusivity, with standards such as WCAG informing non-visual menu adaptations to broaden usability.[89]
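The tree-structured IVR navigation described at the start of this section can be modeled as a nested dictionary walked by the caller's keypad digits or recognized keywords. The menu contents below are invented for illustration:

```python
# Hypothetical IVR menu tree: each node has a spoken prompt and
# a mapping from keypad digits to child nodes.
IVR_TREE = {
    "prompt": "Main menu",
    "options": {
        "1": {"prompt": "Billing", "options": {
            "1": {"prompt": "Check balance", "options": {}},
            "2": {"prompt": "Make a payment", "options": {}},
        }},
        "2": {"prompt": "Technical support", "options": {}},
    },
}

def navigate(tree: dict, inputs: list[str]) -> str:
    """Follow a sequence of selections down the tree; on an invalid
    input, stay at the current node (i.e., repeat its prompt)."""
    node = tree
    for key in inputs:
        node = node["options"].get(key, node)
    return node["prompt"]

# navigate(IVR_TREE, ["1", "2"]) -> "Make a payment"
```

Conversational VUIs replace the fixed digit keys with speech recognition, but the underlying hierarchy being traversed is often the same kind of tree.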

References
