
Creating Windows touch apps for different form factors


Today there are countless devices on the market equipped with a touchscreen, with sizes ranging from the smallest smartphones (5.1 inches and below), through the “medium” screens of so-called phablets (under 7 inches), up to tablets (from 7 to 10.1 inches), and finally Ultrabooks and “All-In-One” devices, which combine the features of different devices in a single product, allowing you, for example, to detach the display from the laptop keyboard and instantly turn the touch screen into a tablet.

For producers of software and operating systems, it is therefore essential first of all to fully support touch screens, but also to adapt to the wide variety of available displays and to their diverse intended uses, as is the case with “All-In-One” devices.

The heterogeneity of form factors represents a difficult challenge for developers, but also an interesting opportunity to seize if you want to create apps that are increasingly on a human scale.

The Requirements

If you decide to take this path, the design of the app, and in particular of its graphical interface, must take into account some important requirements; here is a non-exhaustive list:

  • the right size of visual controls: the controls the user interacts with (buttons, text boxes, checkboxes, menus, icons, etc.) must be large enough to be used comfortably with the fingers, but not so large as to look clumsy; alternatively, a different arrangement of these controls can be provided depending on the display size, so as to offer an interface that always suits the context;
  • multi-touch support: where it is intuitive to perform actions with several fingers, the app must be able to detect this condition and handle it appropriately;
  • provide immediate feedback: it is essential that the app gives an immediate visual or tactile response to the user’s actions; the same applies to moving objects and to “pinch” and “zoom” operations, which must occur with the shortest possible latency;
  • multiple input devices and usage modes: above all with “All-In-One” devices, it may be necessary to support several input devices, for example handling mouse interactions (when in desktop configuration) as well as touch ones, or allowing both to be used simultaneously;
  • optimizing performance: since the app can run on different devices with heterogeneous hardware, it is important to optimize the code so that the user experience is not excessively penalized on devices with limited resources, perhaps balancing the refinement of graphic effects and animations, if any, with an eye to energy saving.

Platforms, languages and tools

Another crucial factor for the success of your app is the choice of operating system, and of the platforms and devices it can reach.

Microsoft Windows 8, for example, meets many of the requirements stated above: it provides a user interface that can be used comfortably with either mouse or touch, and it is installed on many devices, especially Ultrabooks and “All-In-One” systems. It can therefore be a suitable choice for developing applications that target different displays and form factors.

When it comes to Windows 8, you cannot avoid mentioning the marketplace that is the preferred channel for publishing, distributing and purchasing applications: the Windows Store.

A requirement for publishing apps on the Windows Store is the adoption of languages compatible with all the platforms supported by the marketplace, for example the standard HTML5 and JavaScript web languages. If you use Visual Studio 2013, you can create projects that share code and “assets” (resources, styles, strings, etc.) to generate apps that can run on any device running a Windows operating system: the so-called Universal Apps. If, however, you want to exploit the knowledge you have gained with the C# language and the .NET Framework, and extend compatibility to systems other than Microsoft’s, such as iOS and Android, you can use Xamarin.

>> Download the educational sample code for Windows* 8

The importance of the User Experience

The term “User Experience” (UX) refers to the perception and subjective response of users in the use of a given product, system or service.

The design of user interfaces that adapt to different devices has become an important activity for the success of an application, to the point that demand has grown for professional figures dedicated to this aspect, such as the UX Designer, who performs the delicate task of identifying the most effective solutions to provide the best possible experience: excessive latency in response to user actions, slower than a certain tolerable threshold, or the difficulty of using a convoluted interface can make even the most interesting and feature-rich application unusable.

For these reasons, it is important that the user interface is fast and responsive, without neglecting the consumption of the device’s resources (CPU, memory, etc.), which in the case of smartphones and tablets tend to be very limited, and energy saving to preserve battery life.

Input management on different form factors

To address the problem of supporting touch on different form factors, with the possibility of “degrading” towards alternative input devices such as the mouse, we must first make some clarifications and define the elements of the domain we are dealing with.

First of all, there are various levels at which the operating system interprets input, which we will look at shortly.

Windows 8 natively manages these levels of abstraction, each of them suited to a particular context of use dictated by the specific form factor of the device, and provides an API that allows developers to do the same in their applications.

Contact points (Pointer)

The API allows you to receive simple, low-level events related to a generic pointer, i.e. an entity that carries information about a single point of contact with the screen and about the device that produced it, whether it is a finger, a pen, or a mouse click.

The system creates a Pointer object when a contact is detected; the object is then destroyed when the contact itself ceases.

In a “multi touch” scenario, each point in contact with the screen represents an individual and separate Pointer.
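
For example, here is a minimal sketch in WPF (the same framework used by the code later in this article), where each contact surfaces through the TouchDown/TouchUp events and is identified by TouchDevice.Id; the handler names are hypothetical:

<pre class="brush: csharp">
// Sketch: handlers wired to a WPF Window's TouchDown/TouchUp events
// (e.g. TouchDown="OnTouchDown" TouchUp="OnTouchUp" in the XAML).
// Each TouchDevice.Id identifies a single contact point, mirroring
// the "one Pointer per contact" model described above.
private void OnTouchDown(object sender, TouchEventArgs e)
{
    // A new contact has been detected: in a multi-touch scenario
    // every Id corresponds to a distinct, independent pointer.
    TouchPoint point = e.GetTouchPoint(this);
    Title = string.Format("Contact {0} down at {1}", e.TouchDevice.Id, point.Position);
}

private void OnTouchUp(object sender, TouchEventArgs e)
{
    // The contact has ceased: the corresponding pointer is gone.
    Title = string.Format("Contact {0} released", e.TouchDevice.Id);
}
</pre>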

Gestures and manipulations with touch

Pointer management provides the basis for implementing more complex and articulated handling of the most common gestures and manipulations. For example, you can intercept the so-called “tap and hold” gesture, which occurs when you touch an interface element and keep your finger in contact with the screen, or the horizontal or vertical scrolling gesture, called “swipe”, used to browse pages on screen or perform similar actions.

When we talk about manipulations, instead, we refer to a particular user interaction with a user-interface (UI) element that naturally emulates a real manipulation (hence the term) of a physical object; this happens, for example, when you place your fingers on a circular shape to rotate it.
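
WPF, for instance, supports manipulations natively: once IsManipulationEnabled is set on an element, the ManipulationDelta event reports the translation, rotation and scale computed from the active contacts. A minimal sketch, assuming a hypothetical element named box declared in XAML with a MatrixTransform as its RenderTransform:

<pre class="brush: csharp">
// Manipulation sketch: "box" is assumed to be an element declared in
// XAML with IsManipulationEnabled="True" and a MatrixTransform
// assigned to its RenderTransform.
private void OnManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    var transform = (MatrixTransform)box.RenderTransform;
    Matrix matrix = transform.Matrix;
    ManipulationDelta delta = e.DeltaManipulation;

    // Apply rotation and scaling around the manipulation origin...
    Point origin = e.ManipulationOrigin;
    matrix.RotateAt(delta.Rotation, origin.X, origin.Y);
    matrix.ScaleAt(delta.Scale.X, delta.Scale.Y, origin.X, origin.Y);

    // ...then the translation produced by the fingers.
    matrix.Translate(delta.Translation.X, delta.Translation.Y);

    transform.Matrix = matrix;
    e.Handled = true;
}
</pre>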

Finally, there is the management of bivalent input, i.e. interactions that work both with touch and with mouse clicks, allowing the user to use the mouse for basic operations, for example when the screen is attached to the keyboard on “All-In-One” devices, and to switch to touch input when the screen is detached, for greater speed and convenience.
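
In WPF, this bivalence comes almost for free: touch contacts that are not handled are “promoted” to mouse events, so a single handler can serve both devices. A sketch, using the StylusDevice property to detect whether a mouse event actually originated from a touch contact:

<pre class="brush: csharp">
// Bivalent input sketch (WPF): unhandled touch input is promoted to
// mouse events, so one handler can serve both a physical mouse and a
// finger. StylusDevice is non-null when the mouse event was generated
// by the promotion of a touch or pen contact.
private void OnMouseDown(object sender, MouseButtonEventArgs e)
{
    if (e.StylusDevice != null)
    {
        // The "click" came from a touch or pen contact: here you
        // might, for example, enlarge hit targets or show feedback.
    }
    else
    {
        // A real mouse click, e.g. when the device is docked
        // in its desktop configuration.
    }
}
</pre>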

>> Read the development guide for “ultra-mobile” touch interfaces

The Windows 8 API

The information about the Pointer is conveyed by sending Windows messages (those identified by constants starting with WM_*) to the active application window, or by raising a higher-level event in the frameworks that support them, depending on the platform and the type of application:

  • WM_POINTER message: supported starting with Windows 8 for traditional desktop applications; it can be “captured” to handle both a single touch and a more complex gesture/manipulation (see the sketch after this list);
  • WM_TOUCH message: supported since Windows 7 (and on Windows 8 for backward compatibility); it contains general information about an individual contact between a device and the screen;
  • WM_GESTURE message: supported by Windows 7 and 8; it contains information about one or more simultaneous contact points constituting one of the gestures known and natively supported by the API (pinch and zoom, swipe, slide, etc.), to which you can add “custom” gestures whose implementation is up to the developer, based on the information extracted from the WM_TOUCH message;
  • PointerPoint object: supported in Windows Store applications, regardless of which of the languages supported by that platform is used; it is passed as a parameter by the events of the WinRT library that manage input devices, and the information it conveys is similar to that of the previous messages.
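
As an illustration of the first case, here is a minimal Windows Forms sketch that intercepts WM_POINTER messages in the window procedure; the message constants come from the Windows SDK headers, and the pointer identifier travels in the low word of wParam:

<pre class="brush: csharp">
using System;
using System.Windows.Forms;

public class PointerForm : Form
{
    // Values from the Windows SDK headers (WinUser.h).
    private const int WM_POINTERUPDATE = 0x0245;
    private const int WM_POINTERDOWN   = 0x0246;
    private const int WM_POINTERUP     = 0x0247;

    protected override void WndProc(ref Message m)
    {
        switch (m.Msg)
        {
            case WM_POINTERDOWN:
            case WM_POINTERUPDATE:
            case WM_POINTERUP:
                // The pointer ID is in the low word of wParam; the screen
                // coordinates of the contact are packed into lParam (for
                // simplicity, this sketch ignores the sign extension needed
                // for negative multi-monitor coordinates).
                int pointerId = (int)(m.WParam.ToInt64() & 0xFFFF);
                long lp = m.LParam.ToInt64();
                int x = (int)(lp & 0xFFFF);
                int y = (int)((lp >> 16) & 0xFFFF);
                Text = string.Format("Pointer {0} at screen ({1}, {2})", pointerId, x, y);
                break;
        }
        base.WndProc(ref m);
    }
}
</pre>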

Support for different form factors in the browser

Windows 8 is able to run applications based on Internet Explorer (version 11 at the time of writing) and implemented with the standard Web languages, i.e. HTML5, JavaScript and CSS3.

The execution environment also provides this type of app with all the necessary tools both for use in devices with screens of different sizes, and for managing the user interface through different input devices.

At the user-interface level, automatic adaptation to the screen size can be achieved by adopting one of the many frameworks available for creating “responsive” websites and applications, such as Twitter Bootstrap, organizing the contents of each page of the application within a grid that can be resized or rearranged depending on the space available on screen, following the rules of good responsive design.

The management of standard input can instead be entrusted to the classic client-side event handlers provided by the DOM, and to a specific API dedicated to (multi-)touch support: the Pointer Events API. The specification is still in the standardization phase, but it can be used to manage any type of user interaction, from touch to gestures, with any input device. And if using the API directly proves complex, there is a plethora of ready-to-use libraries, such as Touchy.js, to help us simplify the development of the application.

User interfaces on multiple monitors

If you believe that supporting screens of different sizes is not enough to complicate things, there is also the increasingly common requirement of supporting more than one screen (multi-monitor).

The most widespread scenario of this kind is the dual screen, i.e. the ability of an application to display content on an additional (or secondary) monitor, as well as on the primary one.

Generally, an interface is shown on the primary screen to control what is displayed on the second monitor.

Get information on the screens

To support the “dual screen”, the Windows API and most frameworks provide developers with specific classes for obtaining information about the number of monitors installed, their size and their general characteristics, in order to determine which of them is the primary one, assign content to each device, and position the windows in the correct location.

For example, the Windows Forms library provides the Screen class (in the “System.Windows.Forms” namespace), which exposes the static AllScreens property: it returns an array of Screen objects, one for each available screen, with all the related information. The Primary property of each Screen object indicates whether it represents the primary display (the static PrimaryScreen property returns that screen directly), while the WorkingArea property contains the working area that the screen carves out of the desktop, the desktop being the virtual surface obtained from the union of all available screens.
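
A minimal sketch that enumerates the available screens and prints their characteristics:

<pre class="brush: csharp">
using System;
using System.Windows.Forms;

static class ScreenInfo
{
    static void Main()
    {
        // One Screen object per monitor attached to the system.
        foreach (Screen screen in Screen.AllScreens)
        {
            Console.WriteLine("{0}: bounds={1}, working area={2}, primary={3}",
                screen.DeviceName, screen.Bounds,
                screen.WorkingArea, screen.Primary);
        }
    }
}
</pre>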

The positioning of windows on the screen

Keeping in mind the concept of “working area” just illustrated, displaying a window on a specific screen simply translates into positioning it there: through the Location property in Windows Forms applications, or through the Left and Top properties of the Window object if you are using WPF, which offers a more modern “look & feel” and a richer graphical interface.

Here is an example of code that places the window of a WPF application on the secondary display and maximizes it during the loading phase:

<pre class="brush: csharp">
// Note: using the Screen class in a WPF project requires a reference
// to the System.Windows.Forms assembly.
private void OnLoaded(object sender, RoutedEventArgs routedEventArgs)
{
    // Search for the screen set as secondary.
    Screen secondary = null;
    foreach (Screen screen in Screen.AllScreens)
    {
        if (!screen.Primary)
        {
            secondary = screen;
            break;
        }
    }

    // If no secondary screen was found, prevent the window from opening.
    if (secondary == null)
    {
        MessageBox.Show("The secondary monitor could not be found");
        this.Close();
        return;
    }

    // Change the position of the window so that it appears
    // in the area belonging to the secondary screen.
    Top = secondary.WorkingArea.Top;
    Left = secondary.WorkingArea.Left;

    // Set the maximized state for the window.
    WindowState = WindowState.Maximized;
}
</pre>

Manage full-screen mode

When displaying a window on a separate screen, it is often necessary to simulate a “full screen” mode, i.e. removing all the unnecessary parts of the window (border, title bar and related buttons, etc.) and covering the whole available work area without leaving unused space, limiting possible distractions for the user.

Maximizing the window can lead to some glitches: for example, it often happens that the maximized window slightly spills over into the adjacent desktop area, or that empty spaces remain in the work area.

To avoid these problems, always make sure that WindowStyle is set to “None” at initialization time, and that the window state is set to “Maximized” only at a later time, i.e. only once the window is already on the secondary screen. To inhibit the display of the classic resizing handles, simply set the ResizeMode property to “NoResize”.
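
Putting it all together, a minimal sketch of a loading handler (reusing the secondary variable from the previous example) that applies these settings in the right order:

<pre class="brush: csharp">
// Full-screen sketch: the window must be borderless *before* being
// maximized, otherwise it may spill over onto the adjacent screen.
// "secondary" is the Screen object found in the previous example.
WindowStyle = WindowStyle.None;    // no title bar, no border
ResizeMode = ResizeMode.NoResize;  // no resizing handles

// Move the window into the secondary screen's area first...
Top = secondary.WorkingArea.Top;
Left = secondary.WorkingArea.Left;

// ...and only then maximize it, so it fills that screen entirely.
WindowState = WindowState.Maximized;
</pre>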

Third-party hardware and libraries

In addition to the solutions already described, it is possible to use third-party libraries that simplify the code by taking on the tedious operations of detecting and managing additional screens, exploiting advanced features provided by software coupled with hardware that supports specific protocols. This is the case of the Intel WiDi Extensions SDK, a library for programming machines that support Intel WiDi technology, present in modern Ultrabooks, which allows the device to establish a WiFi connection to a screen that can be detached at any time without interrupting playback.

Through this SDK, developers can integrate the technology into their applications, search for available screens nearby, establish a connection, and extend the user interface to the identified displays.
