
DOI https://doi.org/10.18551/rjoas.2020-09.01

USABILITY ASSESSMENT OF AGRICULTURAL MOBILE APPLICATIONS

IN THE GREEK MARKET

Demestichas Konstantinos*, Adamopoulou Evgenia, Chasta Despoina,

Costopoulou Constantina

Department of Agricultural Economics and Rural Development, Agricultural University of Athens, Athens, Greece
*E-mail: cdemest@aua.gr
ORCID: 0000-0002-8858-6389

ABSTRACT

Nowadays, we are witnesses to an unprecedented growth of infrastructure and technology, especially in the mobile services and applications domain. However, not all mobile applications prove suitable for meeting end-users' expectations. In order for an application to properly address the users' needs, it has to feature usability or, in other words, be effective, efficient and satisfactory. The aim of this paper is to study agricultural mobile applications available for smartphones in the Greek market and conduct a thorough evaluation of their usability characteristics. The materials and methods employed involve the conduct of two separate usability evaluation surveys of five selected agricultural applications. The first one includes a heuristic analysis performed by three experts in design and human factors. The second one is a survey performed with end-users and specifically involves a user group of ten agricultural professionals, who were asked to rate the applications with respect to five different dimensions affecting usability. Both methods included properly composed questionnaires conforming to relevant international usability standards and best practices (such as the ten heuristics of Nielsen) and suitably selected Likert scales. Results show that some of the applications under consideration feature severe design and functionality flaws (e.g. with respect to system feedback and recovery from errors) that could limit their usefulness as well as their impact on farmers' everyday activities. In conclusion, both of the employed methods acted in a complementary fashion, and their joint utilization revealed design flaws that could be addressed and improved in subsequent application development.

KEY WORDS

Evaluation, expert analysis, mobile applications, usability, system design, agricultural information systems.

In the last few years, we have witnessed and taken part in a revolution regarding the way we communicate with friends, co-workers, or even strangers for sharing and exchanging information on a wide variety of issues. The advent of mobile communication technology has enabled us to be connected to the Internet always and everywhere, be it at home, in the office, in the car or during our walks, exercise, outdoor activities, etc. Wide area coverage, combined with the ultra-broadband speeds offered by 4G and 5G communication networks, reaching Gigabits per second (Al-Falahy and Alani, 2017), has enhanced our connectivity, rendering the experience almost equal to that of a corporate or home environment.

These significant advances in telecommunication technology and infrastructure have been coupled with similarly tremendous technological breakthroughs relating to end-user devices, i.e. mobile phones. The so-called "smartphones" feature increased capabilities (Cecere et al., 2015; Kim and Bahn, 2019) in terms of processing power, storage capacity, battery autonomy, connectivity options, multimedia support, screen resolution, and portability. Their small size and weight, combined with affordable cost, have enabled mobile phones to displace long-established devices, such as cameras, video recorders, portable MP3 players, GPS navigation devices, etc.

The aforementioned enhanced capabilities of modern telecommunication networks in terms of speed, delay, availability and accessibility, together with the constantly improving technical characteristics of mobile phones, have enabled and actually driven a boost in the spectrum of services offered to end-users in a vast range of domains and application fields (De Reuver et al., 2016; Ehrenhard et al., 2017). Entertainment services, such as TV or video streaming, communication services such as Voice over IP or instant messaging, education services, online or offline gaming, e-banking, e-commerce, social media and location-based services are just a few of the broad range of services offered to users through a single device. In their vast majority, these services are provided through relevant mobile applications released by the corresponding institutions, e.g. content providers, banks, corporate organizations, commercial establishments, governments, local authorities, etc., and installed on the user's device through authorized distribution channels. Figure 1 demonstrates the remarkable increase in the market size of mobile applications over the last five years, together with a projection for the two upcoming years.

Figure 1 - Market size of mobile applications. Source: https://www.marketresearchfuture.com/reports/mobile-app-development-market-1752 (Accessed 10 Aug. 2020)

The popularity of mobile applications used in entertainment or professional contexts, powered by the simplicity of the development and distribution processes, has resulted in the exponential growth of applications relating to every aspect of everyday life and every production or service domain. Agriculture and rural development, which are fundamental for the global economy, could not be an exception. A great number of mobile applications related to this area have been developed (Kumar and Karthikeyan, 2019; Mendes et al., 2020), covering a wide range of operations, such as provision of market information, tracking of products, logistics support, networking of domain stakeholders, etc. The users of these applications can be farmers, regional cooperatives, local authorities, governments, logistics providers, retailers, market owners, as well as consumers.

For example, mobile applications addressing farmers may include weather forecasting, identification of diseases and their treatment, GPS tracking, listing of agricultural goods, monitoring of agricultural drones, machine vision for crop assessment, analytics for production evaluation, real-time notifications, payment gateways, etc. (Krify, 2020). Figure 2 presents a classification per category of the 745 applications registered in one of the largest online portals of agricultural applications. The full integration of these applications, whose role can extend far beyond information provision, can entail significant economic and societal advantages, among which the creation of jobs, the provision of added-value services and products, the reduction of delays and product waste, the increase of the competitiveness of developing countries, etc.

Figure 2 - Classification of available agricultural mobile apps per category (gardening, machinery, education, technology, events, news, pest, crops, spraying, business, weather, livestock). Source: https://www.farms.com/agriculture-apps/ (Accessed 10 Aug. 2020)

The aim of the present paper is to assess the usability of five agricultural mobile applications available in the Greek market. The remainder of the paper is structured as follows: Section 2 presents the materials and methods of the present paper, focusing on the methodologies for the overall usability assessment of selected applications. Section 3 presents the results of the analysis of usability for the considered agricultural applications, as performed by experts on appropriately selected criteria as well as by real users. Section 4 discusses the correlation of the results of the two methods employed. Finally, Section 5 summarizes and concludes the paper.

MATERIALS AND METHODS OF RESEARCH

Agricultural mobile applications under study. In the context of the present paper, five (5) Greek agricultural mobile applications are studied and evaluated in terms of usability. These applications are described in Table 1.

Usability definition. Even if the technical specifications with respect to the desired features are in place, an application may still not achieve its final goal, which is to meet the target users' requirements, not only in terms of functionality but also regarding usability and effectiveness. Human Computer Interaction (HCI) principles (Shneiderman et al., 2016), as reflected through appropriate design rules and guidelines, should be followed during the design phase, in order to ensure usability of the developed applications. After, or ideally during this design phase, a thorough evaluation process should take place, with a view to detecting possible flaws in the design, identifying possible improvements or even completely pivoting the initial design approach.

The usability evaluation of an application is conducted on the basis of certain identified attributes and metrics, each one contributing in its own way to the overall assessment of the application. The following non-exhaustive list features definitions of the most widely used such attributes (Dix et al., 2003; Alturki and Gay, 2019):

• Efficiency: This attribute refers to the resources consumed in relation to the accuracy and completeness of goals achieved;

• Effectiveness: This attribute relates to the accuracy and completeness with which specified users can achieve specified goals in certain environments and contexts;

• Satisfaction: This attribute refers to the acceptance of the system/design/application by its intended users;

• Learnability: This characteristic refers to the ease with which new users can interact effectively with the application towards achieving their goals;

• Memorability: This attribute refers to the ability of the users to efficiently interact with the system/application after they quit using it and then return.

Table 1 - Overview of studied agricultural applications

Olive Alarm (Supplier: Creative Web Applications). Category: Crops. Functionality: Reporting of infected olive trees. Coverage area: Greece. Year of creation: 2018. Cost: Free. Operating system: Android.

agrofarm (Supplier: Ilias Lambrou). Category: Business. Functionality: Management of agricultural tasks, scheduling, cost tracking. Coverage area: Global. Year of creation: 2014. Cost: Free. Operating system: Android.

Usability evaluation methodologies and techniques. The evaluation of the usability metrics and attributes presented above is typically first conducted by experts, i.e. designers and human factors experts, ideally as part of the design process, in order to ensure that the application meets certain widely accepted usability criteria and design principles. The goal of this expert analysis is to provide feedback to the design and development teams regarding potential violations of certain cognitive principles or inconsistencies with existing empirical results. The timely identification of such issues results in enhanced quality of the final design, minimizing the risk of a redesign becoming necessary, which would require extra monetary and human resources.

However, despite the proven effectiveness of expert evaluation techniques for refining the available design, these obviously cannot substitute for testing and evaluation by actual users of the application, i.e. persons who will indeed use the developed application to meet their needs. End-user evaluation is typically foreseen in the later stages of the development process, when at minimum a basic functional prototype is available. User evaluation can be conducted either in a controlled environment (e.g., a laboratory) or in the actual place where the application is intended to be used (field studies). The users' feedback can be captured in one or more of the following ways:

• automatic means, e.g. through log files or proper analytics, mainly regarding certain usability criteria, such as the time to complete a task, the ability to recover from an error, etc.;

• observation during the use of the application (e.g., feeling observation, reaction observation);

• explicit documentation of the users' opinion, as received through questionnaires, structured or open interviews (Dix et al., 2003).

Several standardized evaluation frameworks have been established and used, with the view to assessing the application based on the usability evaluation criteria discussed above.

Heuristic evaluation is a technique suggested by J. Nielsen and R. Molich to evaluate a system or an application based on simple general principles or design guidelines known as heuristics (Nielsen and Molich, 1990; Nielsen, 1995; Joyce et al., 2016). Ten (10) main heuristics, known as Nielsen's heuristics, are used in this approach, in order to identify the main usability problems (Nielsen, 2005). Nielsen's heuristics are presented in the following list:

• Visibility of system status;

• Match between system and the real world;

• User control and freedom;

• Consistency and standards;

• Error prevention;

• Recognition rather than recall;

• Flexibility and efficiency of use;

• Aesthetic and minimalist design;

• Help users recognize, diagnose and recover from errors;

• Help and documentation.

The evaluator is asked to assess each one of the aforementioned attributes on a 0-4 scale based on the conventions included in Table 2.

Table 2 - Rating scale of Nielsen's ten heuristics

Rating Meaning

0 I don't consider this to be a usability problem at all

1 This is just a cosmetic problem, which could be fixed only if resources are available

2 This is a minor usability problem and should be fixed at low priority

3 This is a major usability problem and should be fixed with high priority

4 This problem is catastrophic. It should be fixed prior to the system release

Besides, Part 11 of the international standard ISO 9241, entitled Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs), focuses on the specification and assessment of usability aspects of both hardware and software design (ISO, 2018). The standard identifies three main factors contributing to system usability, namely the following (a short illustrative sketch is given after the list):


• effectiveness, which can be measured through the percentage of goals achieved, the number of power features used, the percentage of functions learnt, the percentage of errors corrected successfully, etc.;

• efficiency, which can be measured by the total time to complete a task, the relative performance of a user compared to an expert, the time required to learn a new function, the time needed to correct an error, etc.;

• satisfaction, which can be measured by using rating scales to assess the satisfaction with the presented features, the ease of learning, the ability to handle errors, etc.
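As a concrete, non-normative illustration of how these three ISO 9241-11 factors can be quantified from task-level observations, the following minimal Python sketch computes effectiveness, efficiency and mean satisfaction from hypothetical task records; the field names, values and the 1-7 satisfaction scale are illustrative assumptions, not prescribed by the standard.

```python
# Minimal sketch (illustrative assumptions, not prescribed by ISO 9241-11):
# effectiveness as the percentage of goals achieved, efficiency as the mean
# task completion time, satisfaction as the mean of per-task ratings.

tasks = [  # hypothetical task records for one user session
    {"completed": True,  "time_s": 42.0, "satisfaction_1to7": 6},
    {"completed": True,  "time_s": 55.5, "satisfaction_1to7": 5},
    {"completed": False, "time_s": 90.0, "satisfaction_1to7": 3},
]

effectiveness = 100.0 * sum(t["completed"] for t in tasks) / len(tasks)
efficiency = sum(t["time_s"] for t in tasks) / len(tasks)
satisfaction = sum(t["satisfaction_1to7"] for t in tasks) / len(tasks)

print(f"Effectiveness: {effectiveness:.1f}% of goals achieved")
print(f"Efficiency: {efficiency:.1f} s mean time per task")
print(f"Satisfaction: {satisfaction:.2f} / 7 mean rating")
```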

Another popular evaluation tool is the Goal Question Metric approach, targeted at deriving usability metrics especially for mobile applications (briefly named m-GQM model) (Hussain, 2012; Hussain et al., 2014). In this model, thirty seven (37) different metrics are identified, in order to assess the usability of mobile phone applications, eighteen (18) of which are objective, while nineteen (19) are subjective. The evaluation metrics used by the m-GQM model are presented in Table 3.

The People At the Center of Mobile Application Development (PACMAD) usability model is another approach that was introduced in order to cover the deficiencies of existing models, especially regarding the evaluation of mobile applications, aiming to consider limitations but also capabilities of mobile devices (Harrison et al., 2013; Zahra et al., 2017). The PACMAD model defines three primary factors that affect the usability of an application, namely:

• the user that interacts with the application;

• the goal, which refers to the anticipated outcome;

• the context of use, referring to the physical and social environment within which the application is used, e.g. the interacting users, the tasks involved, the equipment used, etc.

Moreover, the PACMAD model defines seven (7) attributes of usability, some of which match the attributes of the models discussed above, namely Effectiveness, Efficiency, Satisfaction, Learnability, Memorability, Errors and Cognitive load.

Table 3 - Evaluation metrics used in the m-GQM model

Objective Metrics | Subjective Metrics

Time taken to key-in the data | Satisfaction with physical/virtual keypad
Number of errors while keying in the data | Satisfaction with output
Time taken to install | Satisfaction with the installation process
The number of interactions while installing the application | Satisfaction with screen size optimization
Time taken to learn | Satisfaction with help
The number of mistakes while learning | Satisfaction with contents
Number of errors | Enjoyment
Time taken to complete the task | Satisfaction with interface
Number of tasks successful in the first attempt | Safety while driving
Number of tasks successful in given time | Easy to find help
Time taken to start the application | Stress
Time taken to respond | Satisfaction with signal indicator
Time taken to connect to the network | Satisfaction with virtual joystick
Number of times voice assistance provided in a task | Satisfaction while learning
Number of system resources displayed | Satisfaction with text
Number of requests to update the application | Satisfaction with system navigation
Percentage of battery used during installation | Satisfaction with touch screen
Percentage of battery used | Satisfaction with menu button
(none) | Satisfaction with voice assistance

An overview of the usability attributes of the aforementioned models, coupled with relevant research methodologies and usability evaluation techniques, is presented in Figure 3.

Figure 3 - Overview of usability models, metrics, methodologies and techniques

The System Usability Scale (SUS) is one of the most well-known and broadly used questionnaires in user experience research, presented to the user after the entire usability testing session has been completed (Brooke, 1996). The SUS questionnaire consists of ten (10) Likert-scale questions, presented in Figure 4, the wording of which should not be altered, in order for the results to be comparable to other relevant usability tests. The user's scores for each question are converted to a new number, added together and then multiplied by 2.5 to convert the original scores of 0-40 to 0-100. This final score, however, is not equivalent to a percentage score (Bangor et al., 2009). Several variations of the SUS questionnaire can be found in the literature, such as the two-factor "Learnability & Usability" and the two-factor "Positive & Negative Item" (Borsci et al., 2009; Lewis and Sauro, 2017).
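As an aid to the scoring rule just described, the following minimal Python sketch computes a SUS score from ten responses on the 1-5 scale; the example responses are hypothetical.

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    responses on a 1-5 scale (item 1 first, item 10 last)."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered (positively worded) items contribute r - 1,
        # even-numbered (negatively worded) items contribute 5 - r.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 raw sum to 0-100

# Hypothetical respondent with a fairly positive overall impression
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 2]))  # 82.5
```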

Each of the following ten statements is rated on a 5-point scale from 1 (Strongly Disagree) to 5 (Strongly Agree):

1. I think that I would like to use this system frequently.
2. I found the system unnecessarily complex.
3. I thought the system was easy to use.
4. I think that I would need the support of a technical person to be able to use this system.
5. I found the various functions in this system were well integrated.
6. I thought there was too much inconsistency in this system.
7. I would imagine that most people would learn to use this system very quickly.
8. I found the system very cumbersome to use.
9. I felt very confident using the system.
10. I needed to learn a lot of things before I could get going with this system.

Figure 4 - The SUS questionnaire (Source: www.nngroup.com)

Lastly, the Questionnaire for User Interaction Satisfaction (QUIS) is a usability testing tool composed to assess the users' subjective satisfaction regarding an interface (Assila and Ezzedine, 2016). The QUIS consists of a demographic questionnaire, an overall measure of satisfaction, and measures of user satisfaction in four specific aspects relevant to the user interface, i.e. screen factors, terminology and system feedback, learning factors and system capabilities (Harper and Norman, 1993).

RESULTS OF STUDY

Expert usability evaluation. Initially, in the context of this research paper, the usability of the five applications under consideration was evaluated by three independent experts in the fields of design and HCI, following the heuristic evaluation technique. The experts were presented with ten (10) questions corresponding to Nielsen's 10 heuristics previously discussed. A 7-point Likert scale was used to capture their opinion. The questions issued to the experts, together with the answer scale that had to be completed for every question, are presented in Table 4. It is worth highlighting that the wording of the questions, compatible with Nielsen's guidelines, is appropriate for expert analysis and aims at detecting system design flaws or incompatibilities with standard cognitive principles and well-established best practices.

Table 4 - 7-point Likert scale used for expert analysis questionnaires

1. The user is constantly informed about the system status through appropriate feedback within reasonable time.

2. The language used is simple and familiar to the user and real-world conventions are followed.

3. The user is provided with the ability to cancel actions as well as re-do or un-do functionality.

4. There is consistency regarding the use of words, terminology and symbols throughout the application.

5. The application design prevents the occurrence of errors.

6. The application design minimizes the user's memory load by making objects, actions, and options visible. The user does not have to remember information from one part of the dialogue to another.

7. The application features flexibility to cater for both advanced and inexperienced users.

8. The application features aesthetic design, with good information flow that does not cause confusion to the user.

9. Error messages are expressed in plain language, precisely indicate the problem, and constructively suggest a solution.

10. Help and documentation on the application are sufficient and concise focusing on the specific user tasks.

Entirely disagree Mostly disagree Somewhat disagree Neither disagree nor agree Somewhat agree Mostly agree Entirely agree

1 2 3 4 5 6 7

The scores of the experts for each heuristic evaluation question as well as the average calculated score are presented in Table 5.
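The averaging behind Table 5 is straightforward; the following minimal Python sketch reproduces it for the first heuristic (visibility of system status), using the scores from the first rows of Table 5.

```python
# Minimal sketch: average the three experts' 7-point ratings per application
# for one heuristic (visibility of system status), as in Table 5.
ratings = {
    "Olive Alarm": [4, 5, 5],
    "agrofarm":    [5, 5, 7],
    "Bee Files":   [4, 5, 5],
    "Agronote":    [3, 5, 5],
    "Bee Manager": [4, 5, 5],
}

for app, scores in ratings.items():
    avg = sum(scores) / len(scores)
    print(f"{app}: {avg:.2f}")
# agrofarm obtains the highest average for this heuristic (5.67; the paper
# lists 5.66, apparently truncating rather than rounding the value).
```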

An overview of the scores of the studied applications in each of the ten heuristics is presented in Figure 5. This graphic overview facilitates comparison of the five applications and identification of the areas in which each application excels or features weak points.

As can be observed, the agrofarm application has achieved the highest score in all categories, with the exception of the one relevant to error prevention. The experts assessed that the users of this application were provided with data saving and deletion functionalities, support of navigation through the application using standard device buttons coupled with navigation shortcuts options, consistent terminology, sufficient yet minimum information regarding the functionalities of the application and comprehensive error messages and help guidelines.

On the other hand, the final place in the ranking of the applications in terms of usability, according to the experts' opinions, is not as obvious as the first place, since the Agronote and the Bee Manager applications take turns in occupying the last and the second to last positions. As far as the Bee Manager application is concerned, the experts assessed that the system does not provide enough mechanisms for error prevention, navigation shortcuts are not sufficient, instructions for using the different functionalities of the application are inadequate, interface design between different screens is inconsistent, and support for storing and deleting data is insufficient. Lack of navigation shortcuts, inconsistency between screens, and absence of error messages were also the main weaknesses of the Agronote application, as attested by the experts.

Table 5 - Questions based on Nielsen's heuristics

Question Application Expert #1 Expert #2 Expert #3 Average score

The user is constantly informed about the system status through appropriate feedback within reasonable time Olive Alarm 4 5 5 4.66

agrofarm 5 5 7 5.66

Bee Files 4 5 5 4.66

Agronote 3 5 5 4.34

Bee Manager 4 5 5 4.66

The language used is simple and familiar to the user and real-world conventions are followed Olive Alarm 5 7 7 6.33

agrofarm 6 6 7 6.33

Bee Files 4 6 6 5.33

Agronote 4 6 6 5.33

Bee Manager 5 5 5 5

The user is provided with the ability to cancel actions as well as re-do or un-do functionality Olive Alarm 5 5 6 5.33


agrofarm 4 6 6 5.33

Bee Files 3 5 5 4.33

Agronote 5 5 6 5.33

Bee Manager 4 6 6 5.33

There is consistency regarding the use of words, terminology and symbols throughout the application Olive Alarm 4 6 6 5.33

agrofarm 5 5 7 5.66

Bee Files 5 5 5 5

Agronote 4 6 6 5.33

Bee Manager 4 5 5 4.66

The application design prevents the occurrence of errors Olive Alarm 4 6 6 5.33

agrofarm 3 5 5 4.33

Bee Files 4 5 5 4.66

Agronote 4 5 5 4.66

Bee Manager 3 4 4 3.66

The application design minimizes the user's memory load by making objects, actions, and options visible. The user does not have to remember information from one part of the dialogue to another Olive Alarm 4 4 4 4

agrofarm 4 5 5 4.66

Bee Files 2 2 4 2.66

Agronote 4 5 5 4.66

Bee Manager 3 3 4 3.33

The application features flexibility to cater for both advanced and inexperienced users Olive Alarm 3 3 4 3.33

agrofarm 3 3 6 4

Bee Files 1 4 5 3.33

Agronote 2 2 5 3

Bee Manager 1 1 5 2.33

The application features aesthetic design, with good information flow that does not cause confusion to the user Olive Alarm 3 5 5 4.33

agrofarm 5 7 7 6.33

Bee Files 4 5 6 5

Agronote 1 5 5 3.66

Bee Manager 2 4 4 3.33

Error messages are expressed in plain language, precisely indicate the problem, and constructively suggest a solution Olive Alarm 4 4 5 4.33

agrofarm 4 5 5 4.66

Bee Files 4 4 4 4

Agronote 3 3 4 3.33

Bee Manager 3 4 5 4

Help and documentation on the application are sufficient and concise focusing on the specific user tasks Olive Alarm 4 5 5 4.33

agrofarm 4 6 6 5.33

Bee Files 4 5 5 4.66

Agronote 3 3 3 3

Bee Manager 3 3 5 3.66


Figure 5 - Experts' scores on the 10 Nielsen's heuristics


Figure 6 - Radar diagram of experts' scoring

User usability evaluation. As discussed in Section 2, a usability evaluation process cannot be considered comprehensive unless the actual end-users participate in the evaluation process. For this reason, in the context of the present research activities, ten (10) persons from the agricultural domain, and, in particular, six (6) men and four (4) women aged between 25-35 years, engaged with all of the applications under consideration and were asked to complete the QUIS questionnaire with properly selected questions on screen functionality, terminology and system feedback, learning factors, system capabilities and application content. It is worth underlining that, according to a study performed by J. Nielsen and T. Landauer (Nielsen and Landauer, 1993), ten (10) user evaluators are enough in order to detect over 90% of the existing usability problems.
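This claim follows from the well-known Nielsen-Landauer problem-discovery model, in which the proportion of problems found by n evaluators is 1 - (1 - λ)^n, with λ the average probability that a single evaluator detects a given problem (about 0.31 in their data). A minimal Python sketch, treating λ = 0.31 as an assumption carried over from that study:

```python
# Proportion of usability problems expected to be found by n evaluators,
# following the Nielsen-Landauer model: found(n) = 1 - (1 - lam)**n.
# lam ~= 0.31 is the average single-evaluator detection rate reported in
# their study; it is an assumption for any particular application.
lam = 0.31

for n in (1, 3, 5, 10):
    found = 1 - (1 - lam) ** n
    print(f"{n:2d} evaluators -> {100 * found:.1f}% of problems found")
# 10 evaluators -> ~97.6%, consistent with the 'over 90%' figure above.
```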

Part of the QUIS questionnaire on overall system satisfaction is presented in Table 6, whereas the questions issued to the users on the five factors impacting usability are presented in Table 7.

Table 6 - Part of the QUIS questionnaire on overall system satisfaction

Overall system satisfaction

1 2 3 4 5 6 7

Terrible Wonderful N/A

Frustrating Satisfying N/A

Dull Stimulating N/A

Difficult Easy N/A

Rigid Flexible N/A

Table 7 - QUIS questions on system usability

Screen

Screen design was helpful Never 1 2 3 4 5 6 7 Always N/A

Volume of information presented on screen Inadequate Adequate N/A

Structure of information presented on screen Chaotic Organized N/A

Sequence of screens Confusing Clear N/A

Next screen to appear Unpredictable Predictable N/A

Return to previous screen Impossible Easy N/A

Terminology and System Feedback

Position of messages on screen Inconsistent Consistent N/A

Prompts for input Confusing Clear N/A

System informs about its progress Never Always N/A

Actions lead to predictable outcomes Never Always N/A

Delays Terrible Acceptable N/A

Error messages Unhelpful Very helpful N/A

Learning

Learning to operate the application Difficult Easy N/A

Time needed to learn application operation A lot Little N/A

Functions are performed in a logical sequence Never Always N/A

Steps to perform a task follow a logical order Never Always N/A

Feedback upon task completion Unclear Clear N/A


System Capabilities

System speed Too slow Fast enough N/A

System reliability Never Always N/A

Functions - operations Unreliable Reliable N/A

Ease of operation depends on user experience Never Always N/A

Application Content

Content reliability Dissatisfying Satisfying

Content understanding Too difficult Too easy

Content usefulness Dissatisfying Satisfying

Comments

Please provide your comments on the strongest aspects of the application

Please provide your comments on the weakest aspects of the application

The results of the user usability evaluation process are presented and discussed in the present section. The first part of the QUIS questions aimed at defining the users' overall impression and satisfaction from the application. The average score of the users' opinion on the high-level characteristics of the application is presented in Figure 7.


Figure 7 - Users' opinion on Overall System Satisfaction

As observed from Figure 7, the vast majority of the applications under consideration created positive impressions among the users participating in the usability evaluation and collected scores above average. A negative series of feelings was observed only for the Agronote application, for which the users felt that it was difficult to work with, rigid in terms of operation, rather dull and dissatisfying. On the other hand, the application that scored the highest was agrofarm, which in fact featured the top score in all of the characteristics considered.

As far as screen design is concerned, participants were asked to score the applications regarding the overall design, the layout of information presented, as well as the logical sequence and navigation between screens. The average scores for the aforementioned factors are presented in Figure 8.


Figure 8 - Users' opinion on Screen Design

As can be noticed in Figure 8, the logical sequence of screens, the next screen to appear and the ability to return to the previous screen were raised as the main weaknesses for most of the applications, since these were the questions where the applications gathered the lowest scores. On the other hand, the structure, volume and positioning of on-screen information seemed to satisfy the users, since almost all of the studied applications scored well above average on these criteria. Of particular interest are the contradicting results concerning the Agronote application, which achieved the top score regarding navigation to the previous screen but the lowest one regarding the predictability of the next screen. Similarly, the same application achieved a high score on the volume of on-screen information, but simultaneously the bottom scores regarding the positioning and structure of this information.

In terms of terminology and system feedback, users scored the considered applications on the positioning of on-screen messages and prompts, the information on system progress, the predictability of actions' outcomes, delays and error messages. The average scores for these criteria are presented in Figure 9.


Figure 9 - Users' opinion on Terminology and System Feedback

As can be deduced from Figure 9, terminology and system feedback was the strong point of the Olive Alarm application, which scored the best or the second best in all categories. On the other hand, the Bee Manager application has been reviewed as weak regarding system status visibility, predictability of outcomes of specific actions and usefulness of error messages. Particularly regarding this last criterion, it is noticed that all applications score below average, a fact that indicates that developers will probably need to re-design the error messaging functionality, in order to better meet users' expectations and needs.

As far as learning is concerned, participants were asked to provide their opinion on how easily and quickly they could learn to use the application, on whether there was a logical sequence between functions or steps for performing a specific task, and whether they received proper feedback upon task completion. The average scores given by the users to the applications for these factors are presented in Figure 10.

Figure 10 indicates that the agrofarm application was the one that consistently scored high in all criteria, demonstrating its strength regarding the learnability attribute. On the contrary, Bee Manager failed to satisfy users on learning support, since the majority of them felt that learning the application was quite difficult and time consuming, while functions and task steps did not seem to follow a logical sequence that would further enable learnability. It should be noted, however, that all other applications scored above average in all factors considered.

Subsequently, focus was placed on the system capabilities, with the participants being asked to evaluate the system speed and reliability, the effectiveness of the involved application functions and operations, as well as the degree to which ease of operation depended on user experience. The results of this group of questions are presented in Figure 11.

Figure 10 - Users' opinion on learning support

Figure 11 - Users' opinion on System Capabilities

The Olive Alarm application appears to satisfy users the most regarding system reliability and speed, as well as the reliability of the involved functions and operations. On the other hand, Bee Manager appears to be the weakest regarding speed and reliability, with average scores on the reliability of functions and the dependence of operation ease on user experience. As in the previous group of factors on learnability, all the other applications scored above average in all factors considered.

The final criterion which the participating users were asked to score was the application content itself and, more specifically, its structure, reliability and usefulness. Their opinions on these attributes are presented in Figure 12.

The results of these questions are somewhat contradictory, indicating that the questions issued should probably be rephrased or further clarified, since the range of answers for the same application is quite broad. For example, users gave the highest score to Agronote for its content reliability and, at the same time, the lowest score for its content understanding. Although these are, of course, different attributes, one can hardly judge content that one does not understand as reliable.

Figure 13 provides another viewpoint of the results, enabling designers and developers to see whether the application's performance in terms of usability is balanced across all aspects considered or whether there are certain weak points that need particular re-design efforts.

Figure 12 - Users' opinion on Application Content

Figure 13 - Radar diagram of users' scoring

As can be observed, the scores of the Olive Alarm application in all functional groups of questions are quite balanced and rather high in all categories. Balanced, but relatively low, are also the scores for the Bee Manager application. On the contrary, agrofarm has been evaluated lower on system capabilities compared to the other criteria, and Bee Files lower on terminology and feedback, while the main weakness of Agronote is captured on screen design.
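A radar comparison such as Figure 13 is built from per-group averages of the QUIS item scores. The following minimal Python sketch shows one way to compute such averages; the item scores below are illustrative placeholders and not the raw data of this study.

```python
# Minimal sketch: aggregate QUIS item scores (1-7) into per-group averages
# for a radar-style comparison of one application. The scores below are
# illustrative placeholders, not the study's raw data.
quis_scores = {
    "Screen Design":                 [5, 4, 5, 4, 4, 5],
    "Terminology & System Feedback": [5, 5, 4, 4, 5, 3],
    "Learning":                      [5, 5, 4, 5, 4],
    "System Capabilities":           [5, 5, 5, 4],
    "Application Content":           [5, 4, 5],
}

group_averages = {
    group: sum(items) / len(items) for group, items in quis_scores.items()
}

for group, avg in group_averages.items():
    print(f"{group}: {avg:.2f}")
# These five averages form one application's trace on the radar diagram.
```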

DISCUSSION OF RESULTS

The results of the present paper have already been thoroughly presented and analyzed in Section 3. This section adds to this discussion by comparing the two methods followed. The evaluation process of an application's usability often consists of two discrete parts, i.e. the analysis by experts, usually performed close to the development process, and the evaluation by users of the application, usually conducted when at least a minimal functional prototype is available. In the case of the present analysis, the latest versions of the applications available through the corresponding marketplace were made available to both experts and users.

Experts were asked to provide their opinion on whether the applications satisfied well-known and widely used heuristics, namely Nielsen's heuristics, with the goal of assessing whether they were designed and functioning following well-established cognitive paradigms and design principles. On the other hand, users from the agricultural domain were asked to express their opinion on specific questions regarding certain aspects of the applications, such as screen design, terminology, content, etc. The wording of the questions and the scale of answers provided were simple and intuitive, since they were addressed to non-experts.

As already underlined, these two surveys are neither competitive nor contradictory, but rather complement one another, with the ultimate goal of capturing the overall usability attribute and providing valuable feedback to the design and development team. The heterogeneity of the questions used in the two surveys does not enable a direct, point-to-point comparison of the results and, of course, this was never the intention. However, by studying the acquired answers, certain alignments or disagreements can be observed, as also found in other literature studies (Fu et al., 2002; Korhonen, 2010).

For instance, while the application Agronote scored the lowest on system status visibility according to the experts, the users gave it the second highest score on the same criterion. Moreover, the same application was given the highest score by users on the helpfulness of error messages but was ranked as the worst in this area by the experts. On the contrary, agrofarm received the highest score both from users and experts concerning learnability, indicating a significant strength of this application regarding this attribute.

CONCLUSION

In conclusion:

1. mobile applications have not only appeared in our life, but have been established as one of the most widespread means used for entertainment, information, communication and professional purposes;

2. millions of people worldwide re-assess their everyday practices and decide to introduce mobile applications, either in a complementary or in an exclusive way, into those practices;

3. several domains have been impacted by this shift in aspects including, for example, commerce, banking, public administration, healthcare, and others;

4. the agricultural domain could not be an exception, with hundreds of mobile applications being developed and distributed in order to assist farmers, breeders, logistics operators, distributors, and consumers in issues related to agriculture;

5. however, in order for a mobile application to meet user demand and provide functionality that could actually assist the end users towards the achievement of their goals, these applications must be usable, in terms of effectiveness, efficiency and satisfaction.

The present paper studied five distinct mobile applications of the agricultural domain, available in the Greek market, and

1. conducted both an expert and an end-user usability evaluation for each of them;

2. certain criteria and scoring factors were selected based on standardized methodologies;

3. results of these two surveys were presented in a comprehensive and illustrative way, accompanied by comments on the conclusions that can be drawn and the re-design needs that arise;

4. the applications were compared according to their relative scores, with their strengths and weaknesses being pinpointed;

5. finally, a comparison of the results of the experts' and users' analysis was made, highlighting the sometimes contradictory but always complementary nature of these two types of usability assessment.

REFERENCES

1. Al-Falahy, N., & Alani, O. Y. (2017). Technologies for 5G networks: Challenges and opportunities. IT Professional, 19(1), 12-20.

2. Alturki, R., & Gay, V. (2019). Usability Attributes for Mobile Applications: A Systematic Review. In Recent Trends and Advances in Wireless and IoT-enabled Networks (pp. 53-62). Springer, Cham.

3. Assila, A., & Ezzedine, H. (2016). Standardized usability questionnaires: Features and quality focus. Electronic Journal of Computer Science and Information Technology: eJCIST, 6(1).

4. Bangor, A., Kortum, P., & Miller, J. (2009). Determining what individual SUS scores mean: Adding an adjective rating scale. Journal of usability studies, 4(3), 114-123.

5. Borsci, S., Federici, S., & Lauriola, M. (2009). On the dimensionality of the System Usability Scale: a test of alternative measurement models. Cognitive processing, 10(3), 193-197.

6. Brooke, J. (1996). SUS: A "quick and dirty" usability scale. Usability evaluation in industry, 189.

7. Cecere, G., Corrocher, N., & Battaglia, R. D. (2015). Innovation and competition in the smartphone industry: Is there a dominant design?. Telecommunications Policy, 39(3-4), 162-175.

8. De Reuver, M., Nikou, S., & Bouwman, H. (2016). Domestication of smartphones and mobile applications: A quantitative mixed-method study. Mobile Media & Communication, 4(3), 347-370.
Dix, A., Finlay, J., Abowd, G. D., & Beale, R. (2003). Human-computer interaction. Pearson Education.

9. Ehrenhard, M., Wijnhoven, F., van den Broek, T., & Stagno, M. Z. (2017). Unlocking how start-ups create business value with mobile applications: Development of an App-enabled Business Innovation Cycle. Technological forecasting and social change, 115, 26-36.

10. Fu, L., Salvendy, G., & Turley, L. (2002). Effectiveness of user testing and heuristic evaluation as a function of performance classification. Behaviour & information technology, 21(2), 137-143.

11. Harper, B. D., & Norman, K. L. (1993, February). Improving user satisfaction: The questionnaire for user interaction satisfaction version 5.5. In Proceedings of the 1st Annual Mid-Atlantic Human Factors Conference (pp. 224-228).

12. Harrison, R., Flood, D., & Duce, D. (2013). Usability of mobile applications: literature review and rationale for a new usability model. Journal of Interaction Science, 1(1), 1.

13. Hussain, A. (2012). Metric based evaluation of mobile devices: Mobile goal question metric (mGQM) (Doctoral dissertation, Salford: University of Salford).

14. Hussain, A., Hashim, N. L., & Nordin, N. (2014, June). MGQM: Evaluation metric for mobile and human interaction. In International Conference on Human-Computer Interaction (pp. 42-47). Springer, Cham.

15. ISO 9241-11:2018 (2018). Ergonomics of human-system interaction — Part 11: Usability: Definitions and concepts.

16. Joyce, G., Lilley, M., Barker, T., & Jefferies, A. (2016). Mobile application usability: heuristic evaluation and evaluation of heuristics. In Advances in human factors, software, and systems engineering (pp. 77-86). Springer, Cham.

17. Kim, J., & Bahn, H. (2019). Analysis of smartphone I/O characteristics—Toward efficient swap in a smartphone. IEEE Access, 7, 129930-129941.

18. Korhonen, H. (2010, September). Comparison of playtesting and expert review methods in mobile game evaluation. In Proceedings of the 3rd International Conference on Fun and Games (pp. 18-27).

19. Krify (2020). https://krify.co/agriculture-and-farming-app-development/ (Accessed 8 August 2020).

20. Kumar, S. A., & Karthikeyan, C. (2019). Status of Mobile Agricultural Apps in the Global Mobile Ecosystem. International Journal of Education and Development using Information and Communication Technology, 15(3), 63-74.

21. Lewis, J. R., & Sauro, J. (2017). Revisiting the Factor Structure of the System Usability Scale. Journal of Usability Studies, 12(4).

22. Mendes, J., Pinho, T. M., Neves dos Santos, F., Sousa, J. J., Peres, E., Boaventura-Cunha, J., Cunha, M., & Morais, R. (2020). Smartphone Applications Targeting Precision Agriculture Practices - A Systematic Review. Agronomy, 10(6), 855.

23. Nielsen, J., & Molich, R. (1990, March). Heuristic evaluation of user interfaces. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 249-256).

24. Nielsen, J., & Landauer, T. K. (1993, May). A mathematical model of the finding of usability problems. In Proceedings of the INTERACT'93 and CHI'93 conference on Human factors in computing systems (pp. 206-213).

25. Nielsen, J. (1995). How to conduct a heuristic evaluation. Nielsen Norman Group, 1, 1-8.

26. Nielsen, J. (2005). Ten usability heuristics.

27. Shneiderman, B., Plaisant, C., Cohen, M., Jacobs, S., Elmqvist, N., & Diakopoulos, N. (2016). Designing the user interface: strategies for effective human-computer interaction. Pearson.

28. Zahra, F., Hussain, A., & Mohd, H. (2017, October). Usability evaluation of mobile applications; where do we stand?. In AIP Conference Proceedings (Vol. 1891, No. 1, p. 020056). AIP Publishing LLC.
