Menu (computing)
In user interface design, a menu is a list of options presented to the user.
Navigation
A user chooses an option from a menu by using an input device. Some input methods require linear navigation: the user must move a cursor or otherwise pass from one menu item to another until reaching the selection. On a computer terminal, a reverse video bar may serve as the cursor.
Touch user interfaces and menus that accept codes to select menu options without navigation are two examples of non-linear interfaces.
Some of the input devices used in menu interfaces are touchscreens, keyboards, mice, remote controls, and microphones. In a voice-activated system, such as interactive voice response, a microphone sends a recording of the user's voice to a speech recognition system, which translates it to a command.
Types of menus

A computer using a command-line interface may present a list of relevant commands with assigned shortcuts (digits, letters, or other characters) on the screen. Entering the appropriate shortcut selects a menu item. A more sophisticated solution offers navigation using the cursor keys or the mouse (even in two dimensions, with menu items appearing and disappearing much like the menus common in GUIs). The current selection is highlighted and can be activated by pressing the Enter key.
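As a sketch, the shortcut-selection mechanic can be modeled in a few lines of Python (a generic illustration; the function names are invented for this example):

```python
def render_menu(options):
    """Return the menu text, each option prefixed with a numeric shortcut."""
    return "\n".join(f"{i}. {label}" for i, label in enumerate(options, start=1))

def select_option(options, entry):
    """Map a typed shortcut such as '2' to its menu item; None if invalid."""
    if entry.strip().isdigit() and 1 <= int(entry) <= len(options):
        return options[int(entry) - 1]
    return None

print(render_menu(["Open file", "Save file", "Quit"]))
# 1. Open file
# 2. Save file
# 3. Quit
```

A real implementation would loop, re-prompting on invalid input; the two functions above isolate the rendering and selection logic.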
A computer using a graphical user interface presents menus with a combination of text and symbols to represent choices. By clicking on one of the symbols or text, the operator is selecting the instruction that the symbol represents. A context menu is a menu in which the choices presented to the operator are automatically modified according to the current context in which the operator is working.
A common use of menus is to provide convenient access to various operations such as saving or opening a file, quitting a program, or manipulating data. Most widget toolkits provide some form of pull-down or pop-up menu. Pull-down menus are the type commonly used in menu bars (usually near the top of a window or screen), which are most often used for performing actions, whereas pop-up (or "fly-out") menus are more likely to be used for setting a value, and might appear anywhere in a window.
According to traditional human interface guidelines, menu names were always supposed to be verbs, such as "file", "edit" and so on.[1] This has been largely ignored in subsequent user interface developments. A single-word verb, however, is sometimes unclear; to allow for multiple-word menu names, the idea of a vertical menu was invented, as seen in NeXTSTEP.
Menus are now also seen in consumer electronics, starting with TV sets and VCRs that gained on-screen displays in the early 1990s, and extending into computer monitors and DVD players. Menus allow the control of settings like tint, brightness, contrast, bass and treble, and other functions such as channel memory and closed captioning. Other electronics with text-only displays can also have menus, anything from business telephone systems with digital telephones, to weather radios that can be set to respond only to specific weather warnings in a specific area. Other more recent electronics in the 2000s also have menus, such as digital media players.
Submenus
Menus are sometimes hierarchically organized, allowing navigation through different levels of the menu structure. Selecting a menu entry with an arrow will expand it, showing a second menu (the submenu) with options related to the selected entry.
The usability of submenus has been criticized because of the narrow strip that the pointer must cross to reach them. The steering law predicts that this movement will be slow, and any error in touching the boundaries of the parent menu entry will hide the submenu. Techniques proposed to alleviate these errors include keeping the submenu open while the pointer moves diagonally toward it, and using mega menus designed to enhance the scannability and categorization of their contents.[2][3] A negative user experience with submenus is referred to as "menu diving".[4]
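The steering law's prediction can be made concrete. For a straight tunnel of length A and constant width W it reduces to T = a + b·(A/W), so halving the height of a submenu strip roughly doubles the predicted traversal time. A small illustrative sketch (the constants here are invented, not fitted values):

```python
def steering_time(a, b, length, width):
    """Steering law for a straight tunnel: T = a + b * (A / W),
    where A is the distance steered and W the tunnel width.
    a and b are device- and task-specific fitted constants."""
    return a + b * (length / width)

# Crossing a 200 px wide parent entry that is only 20 px tall
# is predicted to take much longer than crossing a 60 px tall one.
narrow = steering_time(0.1, 0.1, 200, 20)
wide = steering_time(0.1, 0.1, 200, 60)
```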
Usage of attached ellipses
In computer menu functions or buttons, an appended ellipsis ("…") means that upon selection, another dialog will follow, where the user can or must make a choice.[5] If the ellipsis is missing, the function will be executed upon selection.
- "Save": the file will be overwritten without further input.
- "Save as ...": in the following dialog, the user can, for example, select another location or file name or other file format.
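The convention is easy to encode in a menu definition. A minimal, hypothetical Python sketch, in which items that lead to a further dialog are flagged and the ellipsis is appended automatically:

```python
ELLIPSIS = "\u2026"  # the "…" character

def menu_label(name, opens_dialog):
    """Append an ellipsis to commands that open a further dialog."""
    return name + ELLIPSIS if opens_dialog else name

items = [("Save", False), ("Save As", True), ("Print", True)]
labels = [menu_label(name, dialog) for name, dialog in items]
# labels == ["Save", "Save As…", "Print…"]
```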
Touchscreens
Displays with touchscreen functionality, e.g. on modern cameras and printers, also have menus; these are typically arrays of buttons rather than drop-down menus.
References
[edit]- ^ Apple Human Interface Guidelines – Menus
- ^ Jakob Nielsen. "Mega Drop-Down Navigation Menus Work Well".
- ^ Jakob Nielsen. "Mega-Menus Gone Wrong". Archived from the original on July 20, 2018.
- ^ Hennion, Antoine; Levaux, Christophe (May 3, 2021). Rethinking Music through Science and Technology Studies. Routledge. p. 178. ISBN 978-1-000-38195-5.
- ^ "Menus". Apple Developer Documentation. Retrieved August 19, 2025.
In HTML, the <menu> element functions as a semantic toolbar for commands, distinct from navigational lists like unordered lists.[5] Design guidelines emphasize limiting categories to around 10 and items per level to 25, using access keys for keyboard navigation, and disabling rather than hiding unavailable options to maintain predictability.[4]
Beyond traditional desktops, menus have evolved for mobile and web contexts, such as hamburger icons representing collapsed navigation menus in responsive designs, ensuring adaptability across devices.[6] Early menu-driven interfaces, predating widespread GUIs, relied on text-based lists for command-line systems, highlighting menus' role in simplifying complex interactions from the outset of interactive computing.[7]
Definition and Fundamentals
Definition
In computing, a menu is a selectable list of options or commands presented to the user within a user interface, primarily to facilitate navigation, selection, or execution of actions in software applications or operating systems.[1] This structure allows users to interact with the system by choosing from predefined choices rather than entering commands manually, making it a fundamental element in both graphical user interfaces (GUIs) and command-line interfaces (CLIs).

Menus exhibit key characteristics that define their functionality: they organize options either hierarchically (e.g., nested sub-options) or linearly (e.g., flat lists), and they can be static (featuring fixed, unchanging items) or dynamic, adapting based on context, user behavior, or system state, such as frequency of use or current application mode. Examples include numbered lists in CLIs for sequential selection or drop-down menus in GUIs that expand upon user interaction to reveal related choices.[1] These attributes enable menus to serve as versatile widgets for command exploration and invocation across diverse interactive environments.

Unlike other user interface elements such as buttons, which trigger a single predefined action, or icons, which provide visual shortcuts for quick recognition and selection, menus group multiple related options into a cohesive, bounded set to promote structured decision-making. This grouping distinguishes menus by emphasizing choice presentation over isolated inputs, thereby supporting broader task completion without overwhelming the user with disparate controls.[1]

The conceptual roots of menus lie in their role as a mechanism to reduce cognitive load in human-computer interaction, shifting the burden from recall of commands to recognition of visually or textually presented alternatives, which aligns with principles of guided user-system dialogue.
By presenting options explicitly, menus minimize errors and learning curves, particularly for novice users, while allowing efficient access for experts through familiar structures.

Historical Development
The concept of menus in computing originated in the 1960s with text-based systems on mainframes and in defense applications, where command selectors presented users with numbered options or key combinations for selecting operations, serving as precursors to interactive interfaces on terminals.[2] These early implementations emphasized efficiency in batch processing environments, allowing non-expert users to navigate limited choices without memorizing commands. By the early 1970s, such menu-driven selectors became more prevalent in time-sharing systems, facilitating user interaction on character-based displays. A pivotal advancement occurred in 1968 when Douglas Engelbart's "Mother of All Demos" showcased the oN-Line System (NLS), introducing the mouse and on-screen pointer that laid foundational influence for later mouse-driven menu selection, though NLS itself relied more on keysets and direct manipulation than hierarchical menus.[8] This demonstration inspired subsequent innovations in pointing-based interfaces. In 1973, the Xerox Alto at PARC introduced the WIMP paradigm—windows, icons, menus, and pointer—including early pop-up menus that appeared at the cursor position for command selection, marking the shift toward graphical menu systems.[9] Building on this, Xerox PARC's 1974 Smalltalk environment refined pop-up menus with hierarchical structures, demonstrating them publicly in 1975 as a core element of dynamic user interfaces.[8] The 1980s saw menus popularized in commercial personal computers. 
The 1983 Apple Lisa featured the first pull-down menu bar at the top of the screen, complete with checkmarks, keyboard shortcuts, and disabled options, revolutionizing application navigation.[10] This design carried over to the 1984 Apple Macintosh, which adopted a menu-driven interface as standard, making graphical menus accessible to mainstream users.[10] Microsoft followed with Windows 1.0 in 1985, incorporating cascading menus below application title bars, which allowed nested sub-options and became a staple in productivity software.[8]

In the 1990s, menu designs expanded beyond desktops. NeXTSTEP, released in 1988 and evolving through the decade, introduced vertical menus alongside its 3D-beveled aesthetics, providing space-efficient alternatives to horizontal bars in object-oriented environments.[8] Concurrently, menus integrated into consumer electronics; on-screen displays (OSD) for VCR programming emerged in the 1980s with Akai models, becoming widespread by the mid-1990s in TVs and VCRs for channel selection and timer settings. These adaptations brought menu concepts to non-computer devices, emphasizing simplicity for household use.

From the 2000s onward, menus evolved with web and mobile platforms, emphasizing context-sensitive designs that adapt to user actions, such as right-click options in browsers or gesture-triggered lists in apps.[11] Open-source desktops like GNOME (from 1997) and KDE (from 1996) influenced this shift; GNOME's minimalist panels integrated searchable menus, while KDE's customizable Kickoff menu supported hierarchical and application-launching innovations, fostering community-driven refinements in Linux environments.[12] These developments prioritized adaptability across devices, from desktops to touchscreens.

Types of Menus
Command-Line Menus
Command-line menus are text-based interfaces embedded within command-line environments, presenting users with a structured list of selectable options to execute commands without requiring full syntax recall. These menus typically employ numbered or lettered lists for sequential or hierarchical choices, often linear in presentation to accommodate terminal constraints, with shortcuts like single digits or letters for quick selection. In early systems, such structures facilitated guided interaction on limited hardware, as seen in mainframe terminals where full-screen forms with menu options minimized errors through predefined selections.[13][14]

Navigation in command-line menus relies exclusively on keyboard inputs, such as entering a number or letter to select an option, pressing Enter to confirm, or using arrow keys for scrolling in enhanced implementations; direct typing of commands is sometimes allowed as an alternative. For instance, the Unix fdisk utility launches an interactive mode displaying a menu of single-letter commands, such as 'p' to print the partition table, 'n' to add a new partition (followed by prompts for details), 'd' to delete a partition, and 'q' to quit, enabling step-by-step disk management via typed responses. Similarly, MS-DOS fdisk presents a numbered menu, including option 1 to create a DOS partition, 3 to delete a partition, and 4 to display information, with users inputting digits to proceed through submenus for tasks like setting active partitions.[15][16]

These menus offer advantages particularly for expert users, providing efficient, concise interaction that outperforms free-form commands in speed while requiring minimal system resources, making them ideal for resource-constrained environments like mainframes in the 1960s–1970s and early PCs in the 1980s.
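The single-letter command loop used by utilities like fdisk amounts to a dispatch table keyed on letters. A minimal Python analogue (the handlers here are hypothetical stand-ins, not fdisk's actual behavior):

```python
def make_dispatcher(commands):
    """commands maps a single-letter shortcut to (help text, handler).
    The returned function runs the handler for a letter, or reports
    an unknown command, mirroring a menu-driven prompt loop."""
    def dispatch(letter):
        entry = commands.get(letter)
        if entry is None:
            return f"unknown command: {letter!r}"
        _help_text, handler = entry
        return handler()
    return dispatch

# Hypothetical fdisk-style commands; real handlers would act on a disk.
commands = {
    "p": ("print the partition table", lambda: "table printed"),
    "n": ("add a new partition", lambda: "partition added"),
    "q": ("quit without saving changes", lambda: "bye"),
}
dispatch = make_dispatcher(commands)
```

A surrounding loop would read one letter per prompt and call `dispatch` until the quit command is chosen.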
Historically, they powered utilities on Unix-like systems and DOS, as well as menu-driven interfaces in text adventures created with tools like The Quill, where players selected actions from lists to advance narratives without parsing complex inputs. Integration with scripting enhances dynamism; for example, Bash's select construct generates runtime menus from arrays, allowing scripts to present context-dependent options like file choices derived from directory listings.[17][18][19][20]

Despite these strengths, command-line menus pose limitations for novice users, lacking inherent visual cues or intuitive discovery, which can hinder learnability compared to graphical alternatives, and they offer no built-in hierarchy beyond text nesting unless augmented by tools like the dialog utility for pseudo-graphical elements in terminals. Their reliance on memorization or on-screen prompts without spatial organization further exacerbates accessibility issues in complex scenarios.[13][18]

Graphical Menus
Graphical menus represent a fundamental element of graphical user interfaces (GUIs), providing users with visual, interactive lists of options that facilitate navigation and command selection through pointing devices like mice or touch inputs. Unlike text-based systems, these menus leverage spatial arrangement, icons, and animations to enhance accessibility and efficiency, originating from early innovations in windowing systems that aimed to mimic physical desk interfaces. They are integral to WIMP (Windows, Icons, Menus, Pointer) paradigms, where menus serve as organized gateways to application functions, allowing users to scan and select options visually rather than recalling commands from memory.

Key subtypes of graphical menus include pull-down menus, which appear as cascading lists triggered from a horizontal menu bar at the top of an application window; pop-up menus, which emerge contextually near the pointer position, often via right-click actions; pie menus, radial arrangements that fan out from a central point for rapid selection; and ribbon interfaces, which evolve traditional menus into tabbed, contextual toolbars combining icons and labels for workflow-oriented tasks.

Pull-down menus, popularized in the Xerox Alto and later Apple Macintosh systems, enable hierarchical organization by dropping down from top-level categories like "File" or "Edit." Pop-up menus, also known as context menus, adapt to the selected object, such as offering copy-paste options in file explorers. Pie menus, introduced in research on efficient pointing interactions, reduce selection time by minimizing cursor travel through angular positioning. Ribbon menus, as implemented in Microsoft Office since 2007, integrate menu functions into customizable tabs to streamline complex operations like document formatting.
Design elements in graphical menus emphasize clarity and adaptability, incorporating text labels (typically action-oriented verbs or nouns like "Save" or "View") alongside icons to convey meaning at a glance, while dynamic features such as graying out unavailable options or highlighting hovered items provide real-time feedback. These elements ensure menus remain unobtrusive when inactive, expanding only on demand to preserve screen real estate. For instance, in applications like Microsoft Word, menu bars display persistent top-level options with sub-items revealed on click, blending textual and symbolic representations for broad usability. Context menus in operating system file explorers, such as Windows Explorer, similarly adjust contents based on file types, disabling irrelevant actions like "Rename" for read-only files.

The evolution of graphical menus has progressed from static, text-heavy lists in early systems like the Xerox Star to modern implementations featuring smooth animated transitions and adaptive layouts that respond to user context or device orientation. This shift, driven by advancements in display technology and interaction design, underscores their role in WIMP environments by enabling intuitive hierarchy without cluttering the interface. Animated expansions, for example, guide the eye during menu invocation, improving perceived responsiveness in software like Adobe Photoshop.

Advantages of graphical menus lie in their support for visual scanning, which aids discoverability by allowing users to browse options spatially rather than linearly, and their capacity to embed hierarchies compactly, thus accommodating complex applications without overwhelming the user. Research highlights how these menus reduce cognitive load by leveraging familiarity from real-world analogs, such as restaurant menus, while maintaining efficiency in pointer-based selection.

Menu Components
Submenus and Hierarchy
Submenus extend the functionality of primary menus by organizing options into nested levels, forming hierarchical structures that allow users to access related commands through progressive selection. In graphical user interfaces (GUIs), cascading submenus typically appear as pull-down or pop-up panels triggered by hovering over or clicking a parent menu item, such as in the common "File > Open > Recent Files" sequence found in desktop applications. This mechanic relies on tree-like hierarchies where each node represents a menu level, enabling efficient grouping of commands while maintaining a compact primary menu.[21][22]

Design principles for submenus emphasize balancing hierarchy depth and breadth to optimize usability, with research indicating optimal performance at around 2 levels of depth, and deeper hierarchies increasing selection times and error rates due to the cognitive demands of traversing multiple layers.[23][24] Designers mitigate this by limiting nesting depth and incorporating visual aids like separators to divide menu sections logically and checkmarks to indicate selected states, such as toggled options in preference submenus. These elements help users parse hierarchies quickly without overwhelming the interface.

Representative examples include nested folder structures in operating system file menus, where users navigate hierarchies like "Documents > Projects > 2025 Reports" to access files, mirroring the submenu logic in applications like Microsoft Windows Explorer. In e-commerce websites, mega-menus expand this concept by displaying thumbnail previews and categorized sub-options in a single large panel upon hover, reducing clicks for browsing product lines as seen in platforms like Amazon.[25][26]

Deep hierarchies pose challenges like information overload, where users face cognitive strain from scanning numerous nested options, leading to slower task completion and higher frustration.
Solutions include breadcrumb trails, which provide a persistent path summary (e.g., Home > Category > Subcategory) to facilitate backtracking without re-traversing menus, and fisheye views that magnify focal items while compressing peripheral ones in hierarchical displays.[27][28]

From a technical standpoint, event handling in submenus often incorporates short delays, such as 300–500 milliseconds, before revealing or closing panels to accommodate imprecise cursor movements and prevent accidental dismissals during navigation. This is implemented via mouse event listeners that monitor hover entry and exit, with adaptive activation areas expanding hit zones around submenu triggers to improve pointing accuracy in cascading designs.[29][30][31]

Indicators and Affordances
In graphical user interfaces, ellipses ("…") serve as a key visual indicator appended to menu items that require additional user input before execution, such as opening a dialog box for further choices. For instance, "Save As…" signals the need for file name and location specification, in contrast to immediate actions like "Save," which execute without further interaction. This convention enhances predictability by distinguishing deferred actions from instant ones.[32]

Other common indicators include right-pointing arrows (►) to denote submenus, checkmarks (✓) or radio buttons (●) for toggleable states indicating selection or activation, and grayed-out text or dimmed icons for disabled options that are unavailable under current conditions. These cues provide affordances (perceivable action possibilities) that guide users toward interactive elements, reducing cognitive load by implying hierarchy, status, and constraints without explicit instructions.[33]

Design guidelines emphasize platform consistency to reinforce these affordances; Apple's Human Interface Guidelines recommend ellipses for commands needing separate interfaces and red tinting for destructive menu items, while Microsoft's Windows UX guidelines advocate ellipses for dialog-launching commands and arrows for cascading menus, with disabled states shown via graying to prevent erroneous selections. Such standards ensure menus imply interactivity intuitively across ecosystems.

Historically, ellipses were standardized in the 1980s with early Macintosh Human Interface Guidelines, evolving from Xerox PARC influences to incorporate icons in modern designs for broader visual signaling. Examples include "Preferences…" in applications like macOS Finder, which opens a settings dialog, and warning icons (e.g., caution triangles) paired with items like "Delete Permanently" to alert users to irreversible actions.[32][4][34]

Navigation and Usability
Interaction Methods
Users interact with menus through keyboard navigation, which enables efficient selection without relying on pointing devices. Arrow keys allow sequential traversal of menu items, typically moving up, down, left, or right to highlight options in linear or hierarchical structures.[35] Hotkeys, such as Alt+F to open the File menu in many graphical user interfaces, provide direct access to top-level menus by combining modifier keys with letter keys.[36] Mnemonics, indicated by underlined letters in menu items (e.g., underlining "S" in Save), allow users to jump to specific options by pressing the corresponding key after opening the menu, enhancing speed for familiar tasks.[32]

Mouse and pointer-based interactions dominate graphical menus, where users click to open a menu and select items, often with submenus appearing adjacent to the parent. Hovering the pointer over a menu item can trigger submenu display without clicking, allowing fluid exploration in pull-down or cascading designs.[35] In pie menus, users initiate interaction by clicking to center the radial menu at the cursor position, then drag the pointer toward the desired slice for selection, which supports marking menu variants where gestures are recorded for repeated use.[37]

Other input devices adapt menu interactions to specialized contexts, such as joysticks in gaming environments, where directional tilting navigates menu items in a grid or list, often combined with button presses for confirmation.[38] Stylus input on tablets enables precise tapping or dragging on menus, leveraging pressure sensitivity for additional control, as in radial menus activated by stylus proximity.[39] Voice-based selection supports linear traversal in audio menus, where users speak item numbers or keywords sequentially, contrasting with non-linear approaches like direct voice indexing that bypass hierarchy for immediate command invocation.[40]

Specific techniques enhance menu usability across devices, including reverse video cursors in terminal-based menus, where an inverted highlight bar indicates the current selectable item for low-resolution displays.[41] Diagonal movement allowances in submenu designs permit users to steer the pointer directly toward child options without strict linear paths, reducing errors in cascading layouts by extending activation zones.[42]

Adaptations cater to user expertise and prevent mishaps, such as accelerators (keyboard shortcuts displayed next to menu items) that enable power users to bypass navigation entirely for frequent commands.[43] Timeout delays in hover interactions, typically 300–500 milliseconds, defer submenu activation to avoid triggering from accidental pointer overshoots, maintaining control during precise movements.[44]

Usability Principles
Usability principles in menu design emphasize ergonomic and cognitive efficiency to minimize user effort and error rates during navigation. Fitts' law, which models the time required to acquire a target as a function of its distance and width, highlights the advantages of edge-placed menus, where targets at screen boundaries have effectively infinite width, reducing movement time and improving pointing accuracy.[45] Similarly, the steering law extends this by predicting the time to navigate constrained paths, such as submenus, where movement time grows linearly with path length and inversely with path width, making deep or narrow submenu tunnels slower and more error-prone for target acquisition.[46]

Common usability issues in menus include "menu diving" errors, where users accidentally slip off narrow submenu targets during hierarchical navigation, leading to selection failures and frustration, particularly in cascading dropdowns that demand precise cursor control.[47] Narrow targets exacerbate this by slowing navigation and increasing cognitive load, as users must adjust for precision. Solutions involve enlarging hit areas to broaden effective target widths, thereby reducing acquisition time per Fitts' law, or adopting mega-menus that display multiple levels simultaneously in a single, expansive overlay, minimizing path constraints and diving risks while supporting visual scanning.[48]

Accessibility principles ensure menus support diverse users, including those relying on assistive technologies.
Screen readers require ARIA roles like menu, menuitem, and menuitemcheckbox to convey structure and enable navigation via arrow keys, while keyboard-only support must allow full operability using Tab, Enter, Escape, and arrow keys without timing dependencies, aligning with WCAG 2.2 Success Criterion 2.1.1 (Keyboard).[49][50] High-contrast indicators, such as underlines or focus outlines meeting the WCAG 1.4.11 (Non-text Contrast) ratio of at least 3:1, further aid visibility for low-vision users; WCAG 2.2 adds Success Criteria 2.4.11 (Focus Not Obscured (Minimum)), 2.4.12 (Focus Not Obscured (Enhanced)), and 2.4.13 (Focus Appearance) to ensure keyboard focus indicators remain visible and distinguishable during menu navigation.[51][52]
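The keyboard behavior expected of an ARIA menu can be modeled independently of any toolkit. Below is a sketch of the focus-movement logic, with the class and method names invented for illustration:

```python
class MenuKeyboardModel:
    """Models ARIA-style menu keyboard handling: ArrowDown/ArrowUp move
    focus with wrap-around, Home/End jump to the extremes, Escape closes.
    A real widget would mirror focus_index onto the focused menuitem."""

    def __init__(self, items):
        self.items = list(items)
        self.focus_index = 0
        self.open = True

    def key(self, name):
        """Apply one keypress and return the currently focused item."""
        n = len(self.items)
        if name == "ArrowDown":
            self.focus_index = (self.focus_index + 1) % n
        elif name == "ArrowUp":
            self.focus_index = (self.focus_index - 1) % n
        elif name == "Home":
            self.focus_index = 0
        elif name == "End":
            self.focus_index = n - 1
        elif name == "Escape":
            self.open = False
        return self.items[self.focus_index]
```

The wrap-around arithmetic is what makes arrow-key traversal feel continuous: pressing ArrowUp on the first item lands on the last.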
To evaluate menu usability, designers apply Hick's law, which states that choice reaction time increases logarithmically with the number of equally probable options, implying shorter menus reduce decision time and cognitive overload in item selection. A/B testing complements this by comparing menu variants—such as depth or layout changes—against metrics like task completion time and error rates, revealing performance differences in real-user contexts.[53][54]
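Hick's law is commonly written T = b·log2(n + 1) for n equally probable options. A quick sketch (the constant b here is illustrative, not a measured value) shows why a flat menu of 63 items and two nested menus of 7 items each are predicted to cost the same decision time, before any navigation overhead is counted:

```python
import math

def hicks_time(n_options, b=0.2):
    """Hick's law: decision time T = b * log2(n + 1) seconds,
    with b an empirically fitted constant (0.2 is illustrative)."""
    return b * math.log2(n_options + 1)

flat = hicks_time(63)        # one menu of 63 items
nested = 2 * hicks_time(7)   # two successive menus of 7 items each
# both evaluate to about 1.2 s of decision time
```

The equality holds only for the decision component; steering and pointing costs still penalize the nested layout, which is why depth limits matter.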
Best practices include limiting menu depth to 2-3 levels to avoid excessive steering demands and maintain findability, grouping related items logically to leverage Gestalt principles for faster scanning, and incorporating search functionality within large menus to bypass hierarchical browsing, thus accelerating access for broad option sets.[55][56]
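Search within a large menu can be as simple as flattening the hierarchy and matching labels, returning breadcrumb paths so results stay orientable. A minimal sketch (the menu contents are invented for this example):

```python
def flatten(menu, path=()):
    """Yield (path, value) pairs for every leaf in a nested menu dict."""
    for label, child in menu.items():
        if isinstance(child, dict):
            yield from flatten(child, path + (label,))
        else:
            yield path + (label,), child

def search(menu, query):
    """Return breadcrumb paths of items whose label contains the query."""
    q = query.lower()
    return [" > ".join(p) for p, _ in flatten(menu) if q in p[-1].lower()]

menu = {
    "File": {"Open": "open", "Export": {"As PDF": "pdf", "As PNG": "png"}},
    "Edit": {"Find": "find"},
}
# search(menu, "as p") -> ["File > Export > As PDF", "File > Export > As PNG"]
```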