Imprecision farming? Examining the (in)accuracy and risks of digital agriculture

The myriad potential benefits of digital farming hinge on the promise of increased accuracy, which allows 'doing more with less' through precise, data-driven operations. Yet, precision farming's foundational claim of increased accuracy has hardly been the subject of comprehensive examination. Drawing on social science studies of big data, this article examines digital agriculture's (in)accuracies and their repercussions. Based on an examination of the daily functioning of the various components of yield mapping, it finds that digital farming is often 'precisely inaccurate', with the high volume and granularity of big data erroneously equated with high accuracy. The prevailing discourse of 'ultra-precise' digital technologies ignores farmers' essential efforts in making these technologies more accurate, via calibration, corroboration and interpretation. We suggest that there is the danger of a 'precision trap': an exaggerated belief in the precision of big data that over time leads to an erosion of checks and balances (analogue data, farmer observation et cetera) on farms. The danger of 'precision traps' increases with the opacity of algorithms, with shifts from real-time measurement and advice towards forecasting, and with farmers' increased remoteness from field operations. Furthermore, we identify an emerging 'precision divide': unequally distributed precision benefits resulting from the growing algorithmic divide between farmers focusing on staple crops, catered for well by technological innovation, on the one hand, and farmers cultivating other crops, who have to make do with much less advanced or applicable algorithms, on the other. Consequently, for the latter farms digital farming may feel more like 'imprecision farming'.


Introduction
Smart farming technologies' claimed potential to feed a growing global population with less, but much more precise, use of inputs is a potent narrative in an era of climate change, shrinking resources, and mounting agricultural pollution. The new digitalised model of farming, interchangeably called 'precision farming', 'digital farming' or 'smart farming', promises to increase efficiency and production through the use of data-driven precise or 'smart' inputs (EC 2019: 1; Shepherd et al., 2020; Fielke et al., 2020). Within-field precision of up to 'sub-inch' accuracy is claimed and notions of accuracy feature strongly in the branding of smart farming companies that call themselves Granular, Decisive Farming, Agroptima, or AgVerdict. The myriad potential benefits of smart farming largely hinge on the promise of increased accuracy, which would allow 'do[ing] more with less' through precise input applications (Hart, 2015; cf. EC 2017; Shepherd et al., 2020). Yet, precision farming's central claim of increased accuracy has hardly been the subject of comprehensive, critical examination in the burgeoning social science literature that engages with digital and smart farming. This article seeks to address this gap by asking two questions. How accurate are smart farming technologies if we look beyond the façade of the often hyped and celebratory language of its proponents? And what are the (potential) risks of overlooked inaccuracies?
As we will demonstrate, the actual functioning of digital precision farming technologies as encountered by farmers in the field is often less than precise. Interestingly, however, both proponents and critics of digital agriculture rarely question the assumed accuracy of digital farming technologies. In the policy-oriented, more techno-optimist literature on digital agriculture, limits to accuracy are framed as resulting either from incomplete adoption and incorrect use by farmers, or from temporary technical 'bugs' that will rapidly be 'fixed'. Statements such as 'we are going to do it very, very precise. That is not that far away' (Heiniger, quoted in Hart, 2015: 1) or that big data analysis will 'go from better to best to perfect to even better than that' (Mustatea, 2015: 3) are frequent. The message is that more sophisticated algorithms will eventually solve current inaccuracies, such as those resulting from integrating various big data sets of varying reliability (McIntosh, 2020: 44). Similar claims explicitly or implicitly feature widely in the scholarly literature on digital agriculture (e.g. Fielke et al., 2020; Shepherd et al., 2020). The exponential growth in both data volume and computing power, combined with improvements in hardware technologies, is seen as advancing digital agriculture steadily towards ever greater accuracy and reliability, with subsequent high gains in efficiency and yields (e.g. Saiz-Rubio and Rovira-Más, 2020: 1).
Critical researchers and NGOs have identified various risks of digital agriculture, such as decreasing farmer autonomy, increased surveillance of farm workers and corporate capture of farm data (e.g. Carbonell, 2016; Fraser, 2019; Klauser, 2018; Global Network for the Right to Food and Nutrition, 2018; Schimpf and Diamond, 2020). These critical accounts, however, likewise do not question the discourse of the sophisticated accuracy of algorithms. Instead, the focus of critique is on the dangers of supposedly 'uber-precise' algorithms, which will be able to monitor all aspects of our lives, while rapidly amassing mounting volumes of big data with high granularity in fewer hands (Carbonell, 2016; Bronson, 2019; Fraser, 2019; Klauser, 2018). The first Friends of the Earth position paper on digital farming, for instance, contains only half a sentence that vaguely hints at problems of accuracy. It states: 'We need to preserve the embedded, analogue knowledge that farmers hold, (…) not only for its own value but in case the technology fails for any reason' (Schimpf and Diamond, 2020: 10, own emphasis).
While the potential accuracies of digital farming technologies indeed present significant challenges around questions such as data sovereignty, unequal access, and mounting technology input costs and debt, we argue that equally (or even more) important risks of data-driven solutions originate in their inaccuracies and inbuilt limitations. The tech philosopher Maxim Februari even argues that the biggest risks for society do not originate from big data's accuracy, but from the stark, almost unquestioned belief in its accuracy coupled with its less-than-accurate real functioning (Bos and Janssen, 2019; cf. McFarland and McFarland, 2015; Rankin 2020). This belief may lead to 'unreliable "evidence-based" decisions' (McArdle and Kitchin, 2016: 457), which can be 'potentially dangerous' when implemented on the wide scale that big data solutions enable (McArdle and Kitchin, 2016: 470). While we do not seek to understate the risks of algorithmic control and surveillance, we argue that an exclusive focus on these risks can blind researchers to other risks involved, such as those stemming from unquestioned assumptions regarding accuracy and misconceptions of what 'accuracy' and 'precision' actually mean, and prevent critical scrutiny of new technologies.
In examining accuracy and precision, we draw on a subset of big data studies that critically engage with the claims of objectivity and accuracy associated with big data, artificial intelligence (AI), and algorithms (Boyd and Crawford, 2012; Ekbia et al., 2015; Kitchin, 2014). Though initially focusing on emerging 'online' spaces such as social media and internet-driven consumption, this literature is beginning to study how 'offline' behaviour is captured, manipulated, and shaped by our interactions with digital sensors and outlets (Gabrys et al., 2016; McArdle and Kitchin, 2016; Kwan, 2016; Knox and Nafus, 2018; Garnett, 2016). The social study of big data, in other words, is extending beyond direct human-technology interactions, towards digitalisation's mediating effects on human-nature interactions.
Like other digital technologies, digital agriculture can be 'precisely inaccurate' (McFarland and McFarland, 2015: 1). High volume and granularity of data are often erroneously equated with high accuracy and reliability. This article focuses on digital yield monitoring and mapping, the sequence of interlinking technologies (Global Positioning System (GPS), sensors, processing algorithms, and satellite maps) that forms the backbone of most digitalisation strategies in agriculture.
We contend that the accrual of many smaller inaccuracies in this sequence presents the danger of a 'precision trap': a strong belief in largely unverifiable, potentially 'precisely inaccurate' big data that over time leads to an erosion of checks and balances (analogue data, farmer observation, etc.) on farms, but also in agricultural policy-making. This article identifies three conditions under which a 'precision trap' may pose a considerable risk factor in farmers' decision making. The first concerns a high degree of opacity of algorithms (cf. Carbonell, 2016; Diakopoulos, 2013); the second, a shift from big data-driven measurement and advice towards prediction into the future. The third condition is increased remoteness of the farmer from daily field operations. These three conditions, we argue, hamper timely checks and balances regarding digital technologies, by farmers and other actors. While digital farming will clearly become more sophisticated over time, the three above-mentioned conditions are also likely to become more important in the years to come. As a result, the risk of negative repercussions due to inaccuracies might well increase rather than decrease.
We do not mean to argue that digital farming technologies should always be hyper-accurate to be of value. It is often the lack of awareness of (potential) inaccuracies (and the subsequent failure to put checks and balances into place), rather than the inaccuracies themselves, that causes problems. Under the right conditions, somewhat inaccurate technologies might generate 'good enough data' (Gabrys et al., 2016) for farmers to work with, whereas in other cases such inaccuracies might pose unacceptable risks. These risks tend to spread unevenly, so that, besides a 'precision trap', we identify a 'precision divide', with accuracy and inaccuracies (and related benefits and risks) unequally divided across types of farms. Without more comprehensive reflection on the accuracies of digital agriculture, however, inaccuracies are likely to be downplayed, or farmers (who, as we will show, do a lot of work to 'make' digital agriculture precise) are blamed for them.
Empirically, this article predominantly focuses on the Netherlands, as a frontrunner in digital farming in the EU. In addition, it features complementary examples from the US, Canada, and the UK as major countries in digital agriculture outside the EU. The methods consisted primarily of web and document research (especially of farm journals) in addition to the attendance of practitioner conferences, workshops, exhibitions, and hackathons relating to digital agriculture in the Netherlands.
Specifically, Visser examined all issues of the main Dutch farm weekly for relevant articles on the accuracy of digital farming technologies for the period 2019-early 2021. Quotes from this journal were translated into English by Visser. Earlier years (back to 2016) were scanned more selectively. For the broader international context, a web search of international journals such as Farmers Weekly (UK), Future Farming (International) and Farm Progress (US) was conducted.
The remainder of this article is structured as follows. The next section presents the theoretical framework. The third, empirical, section examines the (in)accuracies of digital agriculture, focusing on yield measurement and mapping, as a key pillar of precision agriculture in arable farming. Section four briefly discusses the tendency to blame farmers for inaccuracy. Section five explores various conditions under which less than accurate data might entail major risks, while section six discusses the emerging 'precision divide', followed by our conclusions.

Unpacking 'accuracy', big data, and algorithms
Agtech brochures and business reports widely sketch a picture of impressively accurate digital farming technologies, which far exceed what farmers would have been able to achieve without them. Promises such as 'precision of a centimeter', 'pinpoint precision' (Proagrico n.d.) or 'sub-inch accuracy' 1 (Puri, 2016), with digital farming technologies providing 'accurate, precise and reliable recommendations' (EC 2017: 9), which allow for 'taking out the guesswork' (ibid.), abound. Similar claims feature in scholarly articles, with statements heralding, for instance, digital farming's 'quantitative data producing objective decisions' (Saiz-Rubio and Rovira-Más, 2020: 8), as opposed to traditional, supposedly 'intuition-driven' farming (Saiz-Rubio and Rovira-Más, 2020: 16).
Such claims of accuracy, and related assertions of the objectivity of data-driven and evidence-based decision making, require critical reflection. Limitations of technologies often remain 'black boxed', and this tendency is particularly pronounced for digital technologies and big data (Diakopoulos, 2013; Kitchin, 2014). Benefits tend to be highlighted whereas limitations are downplayed or sidelined through 'discursive engineering' (Klauser, 2018).
This paper aims to look beyond this discursive smokescreen and unpack the assumed 'accuracy' of digital technologies. Drawing on insights from critical geography and cartography, we will first stress the importance of understanding what 'accuracy' means, including its epistemological foundation and historical-political context. We then present a pragmatic approach to how the accuracy of digital farming technologies could be assessed. This is followed by a discussion of big data, algorithms, sensors, GPS, and maps, and their multiple inaccuracies.

Accuracy
What is meant by 'accuracy' varies between disciplines, and depends on context and purpose. Tracing the history of environmental mapping, Rankin (2020) demonstrates how accuracy, based on algorithms that fill in blank spaces between individual points (such as weather stations), emerged as the dominant paradigm in mapping, and overtook the previously prevalent objective of 'realism'. Such 'accurate' maps are now ubiquitously used, from climate change and environmental pollution to underground resources, and are also a common element of digital farming technologies. Realism and accuracy, he emphasises, are not the same, but rather represent 'a decisive shift in goals and output, from maps that are finely tuned to the qualitative knowledge of specific topics, scales, and places to maps that show all environmental phenomena as fundamentally similar, and fundamentally quantitative' (Rankin, 2020: 4). This shift was driven by geostatisticians and the interests of resource exploitation, such as mining. Rankin's historical account points out that accuracy has a particular meaning that needs to be understood when engaging with and interpreting its products: in the case of environmental mapping, accuracy refers to a cartographic representation of quantitative estimates of 'most-likely values' calculated by algorithms. Accuracy can become a trap if reflection on this foundation gets lost, and accuracy becomes an 'unquestioned epistemic value' (Rankin, 2020: 28) that crowds out more qualitative, situational and experimental forms of knowledge. In its ultimate form, a narrow focus on accuracy might lead to denouncing all factors that cannot (yet) be measured with high accuracy.
Concerned with a different goal, namely to assess the quality of data in the context of open data sources used for decision-making, McArdle and Kitchin (2016) suggest a number of criteria that can serve as a 'pragmatic catalogue' for examining the accuracy 2 of digital farming technology. Accuracy, here, is understood as the authenticity of data, and the extent to which they accurately (in terms of precision) and faithfully (in terms of fidelity, reliability) represent what they are meant to (McArdle and Kitchin, 2016: 446). Shi et al. (2013, cited in McArdle and Kitchin, 2016: 447-8) suggest seven key metrics of spatial data accuracy/quality, based on the criteria of the International Cartographic Association (Guptill and Morrison, 2013), of which several are relevant for this article, namely: 3

• Positional Accuracy. An indication of the horizontal and vertical accuracy of the co-ordinates used in the data, both for absolute and relative locations. It must account for the processes applied to the data, which are described by the lineage.
• Attribute Accuracy. The accuracy of the quantitative and qualitative data attached to the spatial data.
• Completeness. The degree to which spatial and attribute data are included or omitted from the data-sets. It also describes how the sample is derived from the full population and presents the spatial boundaries of the data. Completeness (which Kitchin calls exhaustivity) is often confused with volume, as will be discussed further on.
• Logical Consistency. The dependability of relationships within the spatial data.
• Temporal Data. The date of observation, the type of update, and the validity period for the data.
Both approaches to accuracy, as illuminated by the work of Rankin and Kitchin respectively, will be useful for the assessment below. The first approach underlines the need to reflect on the epistemological foundation of 'accuracy' as a central notion in digital agriculture, as there is a danger of an accuracy trap (Rankin, 2020), with a side-lining of difficult to measure factors. In digital agriculture, as will be discussed later on, the difficulty in accurately measuring complex ecosystem effects as compared to direct impacts on yields might reinforce a focus on the latter. The second approach is useful as a rather pragmatic assessment of actual digital farming operations.

Big data
One of the earliest, and still common, means of distinguishing big data from other forms of data is the '3Vs', that is to say the huge volume (i.e. enormous quantity), high velocity (i.e. created in real-time or near real-time), and diverse variety (i.e. being structured, semi-structured and unstructured) of big data (Kitchin and Lauriault, 2014). Other criteria that distinguish big data-sets are exhaustivity, resolution, indexicality, relationality, extensionality, and scalability.
Big data analytics, as Kitchin (2014: 2) writes, 'enables an entirely new epistemological approach for making sense of the world': rather than testing a theory by analysing relevant data, new data analytics seek to gain insights 'born from the data'. However, regardless of how big the data-sets might be, critical big data scholars contend that data cannot 'speak for themselves'. Dealing with data always remains an act of interpretation. Boyd and Crawford (2012: 665) identify this as the mythological dimension of big data, the 'widespread belief that large data sets offer a higher form of intelligence and knowledge that can generate insights that were previously impossible, with the aura of truth, objectivity, and accuracy'. Critical perspectives on big data thus underline its frequent misconceptions as 'naturally' occurring and innately precise. Due to its opacity, big data tends to obscure the objectives and values of those who produce and process such data-sets (Kitchin, 2014: 9), including the selectivities and inaccuracies those might engender. McFarland and McFarland (2015: 2) argue that big data is often 'precisely inaccurate' as biases in the data 'are easily overlooked due to the enhanced significance of the results created by the data size'. Big data's statistically 'highly significant results caused by near-zero variances give the researcher a false sense of precision' (ibid. 2015: 2). Big data's vast volume creates 'bigger haystacks' (Kitchin, 2014) in which one can endlessly conduct 'fishing expeditions' that end in spurious correlations (Knox and Nafus, 2018: 13). The sheer size of a big data-set can easily be confused with the statistical notion of a 'population' whereas, in reality, it is often 'a very biased sample' (McFarland and McFarland, 2015: 2), as data are not produced through statistically rigorously designed experiments, but often contain biases (Boyd and Crawford, 2012; Kitchin, 2014). While big data have the connotation of being exhaustive in scope, aiming 'to capture entire populations or systems (n = all)' (Kitchin, 2014: 1), in reality they are 'both a representation and a sample, shaped by the technology and platform used, the data ontology employed and the regulatory environment and (…) subject to sampling bias' (2014: 4).

2 The terms accuracy, quality, reliability, and veracity are often used interchangeably in the literature. For consistency, we only use the term accuracy throughout this paper.
3 In addition, Shi et al. (2013) distinguish semantic accuracy and lineage.

Algorithms
Big data do not emerge naturally, but are generated intentionally, mediated by algorithms (Boyd and Crawford, 2012; Kitchin, 2014). Algorithms are designed to capture particular kinds of data; they are based on scientific reasoning and are informed by previous findings, theories and training, by 'speculation grounded in experience and knowledge' (Leonelli, 2012; in Kitchin, 2014: 5). Hence, a certain degree of subjectivity always characterises algorithms. Theories change over time, and findings from different studies have to be weighed up when composing an algorithm.
Technical definitions see algorithms as bits of code that undertake and/or automate a specific calculation, 'a tool for solving a well-specified computational problem' (Cormen et al., 2009: 5, cited in Burke 2019: 1). Definitions within the social sciences refer to the wider, real-world function of algorithms in producing data-based interactions in socio-technical systems (Burke 2019: 2). To understand an algorithm and its (side) effects requires going beyond its code, examining 'its wide socio-technical assemblage and its use' (Kitchin, 2017: 23). Burrell calls this the study of 'algorithms in the wild' (2016: 2).
As various authors (Burrell, 2016; Gabrys et al., 2016; Kwan, 2016) have convincingly argued, the high degree of opacity characterising algorithms is likely to increase the risks of inaccuracy, as it blinds users to the shortcomings of algorithms and leaves them unprepared to mitigate or prevent such risks. Algorithms harbour three main types of opacity (Burrell, 2016): intrinsic opacity (related to technological complexity), intentional opacity (for instance to avoid copying or gaming of the algorithm, or to hide manipulation of consumers) (Pasquale, 2015: 1), and illiterate opacity (the low competency of users to grasp algorithms). Social science studies of digital agriculture mostly focus on the latter, that is to say how users are trained or able to use digital farm technologies (or not), at the cost of the first two.

Sensors
Unlike direct internet use, which can directly generate all kinds of big data via an algorithm (such as data on the most-used search terms), applications in smart agriculture mostly rely on an array of sensors, which constitute the ears and eyes of an algorithm generating big data. Sensors represent, in the words of Kitchin and Lauriault (2014), the materialities of the assemblage that digital agriculture constitutes. The features of the sensors, and their interplay with the materiality of crops and the environment, will affect the generated data. Also, human labour is mostly required to make sensors work relatively accurately, through for instance calibration, and developing a 'feeling for error', as Garnett (2016) showed based on research on air pollution sensors.
Some types of sensors only use minimal software (such as flow meters in combines, which measure the volume of grain rather simply through detecting the attenuation of a gamma ray) whereas others, such as cameras for weed protection, need a powerful algorithm to generate useful data. Whatever the sensor, data processing is mostly needed to generate more refined, useable data. Algorithms, and often some human labour, are needed to turn such data into yield maps that farmers can use.

Maps
In agriculture, as well as in areas such as cartography, archaeology and parts of environmental and geographical studies, big data often have (as noted above) a spatial dimension and reach the end user in the form of a map. The trend towards digitalised, algorithmic mapping also goes hand in hand with connotations of sophistication, precision, and accuracy. However, as has been widely argued in critical geography and cartography, maps are neither neutral nor accurate representations of reality but social constructions (e.g. Rankin, 2020; Kwan, 2016). Widely used computational techniques such as kriging, which create smooth and detailed gradients in maps based on more spotty point-by-point measures (Rankin, 2020; cf. Kwan, 2016), represent one step in the processing of big data. In this phase errors can be mitigated through modelling. However, the visual persuasiveness and authoritativeness of maps may also strengthen the false sense of precision of 'precisely inaccurate' big data. Consequently, such big data-generated maps may create uncertainty rather than accurate and stable findings (Kwan, 2016).
Overall, the theoretical literature on big data and algorithms discussed above suggests a view of digital farming that critically engages with notions of precision, accuracy, and objectivity. In the subsequent empirical sections, we aim to take the idea of digital agriculture as a 'socio-technical assemblage' (Kitchin, 2017: 23) of technologies, human inputs (and we would add, their interplay with nature) seriously, in order to locate potential (in)accuracies in the various components such as big data, algorithms, sensors, and maps.

(In)accuracy in digital farming: yield mapping
A key pillar of precision farming is yield mapping, and to do that properly a set of technologies has to function accurately, such as the GPS system guiding the tractor or combine (ensuring the 'positional accuracy'; Shi et al., 2013), and the sensors attached to it, which aim to precisely measure the yield (the 'attribute accuracy') at a particular position. Subsequently, the yield measurements are converted into yield maps. In this phase visualisation is combined with some modelling. Based on the yield data the combine sensors have collected, combined with soil data, an algorithm can generate site-specific advice on how much input (fertilizer, herbicides) to apply. Such an algorithm contains code which has been trained on big data sets collected from a large number of farms.
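As a purely illustrative sketch of this chain (the article describes no implementation; all function names, units and numbers below are hypothetical), the following Python snippet shows how a flow-meter reading and a GPS fix might be combined into a georeferenced yield estimate, and how many such points are averaged into a coarse yield-map grid:

```python
from dataclasses import dataclass

@dataclass
class YieldPoint:
    x: float          # easting in metres (from the GPS fix)
    y: float          # northing in metres (from the GPS fix)
    t_per_ha: float   # estimated yield at this position

def yield_at_point(flow_kg_s: float, speed_m_s: float, header_width_m: float) -> float:
    """Convert a flow-meter reading into tonnes per hectare.

    Illustrative only: real yield monitors also correct for grain
    moisture, sensor lag and calibration factors, which is exactly
    where the errors discussed in the text creep in.
    """
    area_ha_per_s = speed_m_s * header_width_m / 10_000.0  # hectares harvested per second
    return (flow_kg_s / 1000.0) / area_ha_per_s            # tonnes per hectare

def grid_yield_map(points, cell_m: float = 10.0) -> dict:
    """Average georeferenced yield points into square grid cells (a crude yield map)."""
    cells = {}
    for p in points:
        key = (int(p.x // cell_m), int(p.y // cell_m))
        cells.setdefault(key, []).append(p.t_per_ha)
    return {key: sum(vals) / len(vals) for key, vals in cells.items()}
```

Note the division by driving speed in `yield_at_point`: any error in measured speed or flow translates directly into a yield error at that position, which is one reason the speed-related complaints discussed in the following sub-sections matter.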
The following sub-sections will examine inaccuracies, including the ways they may be hidden from sight, within each of the components of the assemblage that yield mapping constitutes. In doing so, various limitations in the realm of spatial data quality (Shi et al., 2013) will be discussed, some of which result from an 'implementation gap' (Sumberg, 2012). The (im)possibilities for farmers to locate or prevent inaccuracies in the socio-technical assemblage of digital farming will also be addressed (and more fully in the next section).

GPS and sensors
The precision claims of proponents of digital agriculture mentioned earlier lean to a large extent on the accuracy of the GPS connection and the subsequent precision with which GPS-steered tractors and combines can navigate a field. The navigation of GPS-steered machinery is arguably the component of yield mapping (and digital farming in general) where real performance is closest to the claims of high accuracy and precision, yet there are some implicit caveats.
A Dutch farm journal that tested GPS guidance on tractors found relatively small deviations compared to the precision claimed by the manufacturers (typically 1-3 cm). However, the journal's test was conducted under a set of (near) perfect circumstances, namely with the best GPS system available, a convenient, moderate vehicle speed, no obstacles in the field, and reasonable weather conditions (Karsten, 2020). Under less ideal circumstances the accuracy of remote technologies is likely to drop. Even in a densely-populated country like the Netherlands, many farmers will use less accurate GPS networks (for instance because the more precise network is more expensive), while in remote places in, for instance, rural Canada or Australia network coverage can be even more problematic.
One important factor that explains the disappointing accuracy of some digital farming technologies is the 'implementation gap' (Sumberg, 2012) between the promised performance of technologies based on observations in rather ideal circumstances at trial fields, and the deviating (weaker) results based on adoption at ordinary farms with more diverse conditions. Even when digital technologies are tested on real farms, the combination of technologies might constitute an ideal ICT (Information and Communication Technologies) ecology rarely met in daily practice. When the GPS signal is less accurate, it negatively affects 'positional accuracy' (Shi et al., 2013).
The accurate measurement of yields via sensors (such as flow meters in combines that measure the flow of wheat being harvested) creates 'attribute accuracy' (ibid.) and constitutes another foundation of yield mapping. However, farmers already complained about the unreliability of yield mapping in the 1990s (Tsouvalis et al., 2000) and continue to experience problems with yield mapping to date. Problematic inaccuracies in yield mapping are widely reported by farmers and precision farming specialists in the Netherlands (Koerhuis, 2020b: 50-52; Meer, 2020: A14; Velden, 2019: 15), as well as in other countries (see Hart, 2015; Knuivers, 2016; McIntosh, 2020 for examples from the US, UK, and Canada respectively). One early adopter in the UK stated, for example: '[S]trange, the quicker I drive during harvesting, the higher the final yield per hectare according to the sensor. […] Manufacturers have changed their focus to new GPS-techniques, like telematics, but they should actually go back to the basis and fix yield mapping. Yield measurement is the basis of precision agriculture. If the basis is not good, fine tuning is of little help.' (Knuivers, 2016: 31)

Conversations with US AgTech industry personnel confirmed the above-mentioned inaccuracies, and indicated an error rate of 10 percent in yield monitors due to inaccuracies in GPS systems, faulty flow meters, and calibration errors (Keogh and Henry, 2016: 49). Of these three identified causes of inaccuracy, two (flow meters and GPS systems) largely originate from inside the 'black box' of technology, and outside the reach of farmers to address.
The calibration of the yield sensors has to be done by the farmer in the field. Inaccuracy of yield sensors caused by a change in driving speed, as mentioned by the farmer above, is a widely encountered problem (Sudduth and Drummond, 2007; Tsouvalis et al., 2000). Calibration guidelines advise calibrating at the required speed, and at a minimum of two or three different driving speeds (Sloan Implement, 2015). This can be a laborious task, as each calibration involves loading the harvested crop into a truck with a weighing instrument (which not every farmer owns). Each crop requires a new calibration, so for farmers with a range of crops it is even more work. Some combines provide automated self-calibration, but they do not address all aspects, like calibrating for moisture level (LG Seeds, 2020). Further, the advice to farmers to try to maintain a steady speed, and especially to avoid stops or sharp alterations in speed, is difficult to apply for Dutch farmers, who have fields with irregular shapes and obstacles like electricity pylons or waterways. As a result, conducting a calibration and judging the accuracy of yield measurement when sudden changes in driving speed occur all require human interpretation. The accuracy of sensors is far from given, but rather has to be actively established by farmers based on their experience. Farmers, like the scientists calibrating air pollution sensors studied by Garnett (2016: 1), have to develop a 'feeling for error', in order to 'know what to look for' when conducting a calibration (ibid.).
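The multi-speed calibration described above can be made concrete with a small, hypothetical sketch (not drawn from the article or from any manufacturer's software): a correction factor is derived for each calibration speed from weigh-truck reference loads, and subsequent sensor readings are adjusted using the factor of the nearest calibrated speed.

```python
def fit_speed_corrections(calibration_runs):
    """Derive one multiplicative correction factor per calibration speed.

    calibration_runs: list of (speed_m_s, sensor_total_kg, truck_scale_kg)
    tuples, one per weighed calibration load. The factor is the ratio of
    the weigh-truck reference weight to the sensor total. Illustrative only.
    """
    return {speed: scale_kg / sensor_kg
            for speed, sensor_kg, scale_kg in calibration_runs}

def correct_reading(sensor_kg, speed, factors):
    """Correct a sensor reading using the factor of the nearest calibrated speed."""
    nearest = min(factors, key=lambda s: abs(s - speed))
    return sensor_kg * factors[nearest]
```

Even this toy version shows why the procedure is laborious: every entry in `calibration_runs` stands for a full load harvested at a fixed speed and weighed on a truck scale, repeated per crop, and speeds between the calibrated ones remain guesswork.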
As with GPS, for sensors there is an implementation gap (Sumberg, 2012). Whereas in trials sensors are tested within a limited time frame, extended use on real farms exposes them to the negative effects of, for instance, the accumulated wear and tear of dirt (Staalduinen, 2020) and the long-term effects of weather. A Dutch tomato grower stated that his sensors were 'quite sensitive to dirt, so they weren't very reliable. You used to have to check them a lot, and that's not what you want when you buy a sensor' (Staalduinen, 2020: 50). Similarly, in dairy farming, the 'autonomous' milking robots require extensive daily check-ups by the farmer 'before, during, and after milking' of at least some of the cows; otherwise errors can cause 'a lot of damage' (Boerderij, 2021: 40). The unreliable functioning of sensors when left unattended in conditions of dust, dirt and moisture is one of the reasons why in farming 'the performance of algorithms can have side effects and unintended consequences, and left unattended or unsupervised they can perform unanticipated acts' (Steiner, 2012; cited in Kitchin, 2017: 19).

Maps
Errors originating from GPS or sensors, or large numbers of missing values, will affect the subsequently generated yield maps. The high resolution of yield maps, which sometimes allows users to 'zoom in to 8 mm' (Stevens, 2020: A21), may create a false sense of precision. In the case of a Dutch flower farmer, the above-mentioned 'precision' was based on a drone scan with a standard GPS precision of (only) 1.5 m. As a result, the positions of all the scanned locations deviated a full crop-bed distance from the real locations, as the farmer suspected and more spatially accurate ground-level RTK-GPS measurements confirmed (Stevens, 2020: A22).
Similarly, the high granularity of soil maps, with values for each point in the field, can be deceptive, as such maps are normally based on a limited number of soil tests 4 in various places in the field, which are then interpolated across the whole map through kriging, a technique widely used in Geographic Information Systems (GIS) (see e.g. Saiz-Rubio and Rovira-Más, 2020: 9). Kriging assumes gradual transitions in values between points of measurement, which is not always the case in reality. 5 While kriging is commonly taken for granted within studies on digital agriculture, it has been examined more critically in studies on mapping, which warn of an 'accuracy trap' (Rankin, 2020) when an excessive focus on accuracy comes at the cost of other values, such as contextualization and non-quantifiable knowledge. Mostly, kriging will estimate gradients of variation in a field quite precisely, but in fields with strong variation in soil composition and sharp and/or irregular spatial transitions, real in-field variation may deviate more significantly from the soil maps.
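The smoothness assumption underlying kriging can be illustrated with a much simpler stand-in, inverse-distance weighting (IDW), which shares the assumption that values change gradually between samples. The sketch below (hypothetical soil values, not from the studies cited) shows how any such interpolation smooths away a sharp soil boundary lying between two sampling points.

```python
# Simplified stand-in for kriging: inverse-distance-weighted (IDW)
# interpolation between two soil samples. Like kriging, IDW assumes
# values change gradually between sampling points, so a sharp soil
# boundary between the samples is smoothed away. Values hypothetical.

def idw(x, samples, power=2):
    """Interpolate the value at position x from (position, value) samples."""
    weights, total = 0.0, 0.0
    for pos, val in samples:
        if pos == x:
            return val  # exact hit on a sampling point
        w = 1.0 / abs(x - pos) ** power
        weights += w
        total += w * val
    return total / weights

# Clay sample (4.0% organic matter) at 0 m, sand sample (1.0%) at 100 m:
samples = [(0.0, 4.0), (100.0, 1.0)]

estimate = idw(50.0, samples)  # midpoint -> smooth average of 2.5%
# If the real clay/sand boundary lies at 60 m, the true value at 50 m is
# still 4.0% -- the smooth interpolated map is off by 1.5 points there.
```

A full kriging implementation additionally fits a variogram to weight samples by spatial correlation, but the gradual-transition assumption criticised in the text is the same.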
A US farmer 6 bluntly described the inaccuracies hidden behind a veil of sophisticated-looking maps, graphs and benchmarks generated by precision farming AgTech providers as follows: 7 '… accurate and effective? Nahhh, nobody cares about that, especially when your product is fancy looking AND first! (…) I'm not against innovative technology. I am just heavily in favor of good, solid science in any technology (…), no matter how impressive the smoke and mirrors …' (Johnson, 2017).
4 For crop monitoring, sensors are likewise normally placed only selectively, at 'critical locations' in the field (EC 2017: 9); moisture sensors, for instance, one or two per field (respectively Meer, 2020: A14; Koerhuis, 2020a: 52).
5 Cf. Kwan (2016) for a critical review of mobility-mapping studies based on big data generated by cell phone towers, in which inaccuracy originates from the assumption that people's spatial movements are straight.
6 The farmer was well aware of the data inaccuracies, as the data came from his own fields and equipment.
7 Also Tsouvalis et al. (2000: 918) cite a UK farmer on sophisticated yield maps that are in fact 'not that precise'.
In sum, the maps with impressively high granularity that are central to digital agriculture are often 'precisely inaccurate' (McFarland and McFarland, 2015), giving farmers a false sense of accuracy, with the risk of insufficient corroboration, undetected errors, and subsequently negative outcomes.

Algorithms and (in)accuracy
Aside from problems caused by hardware (such as yield meters, other sensors or drone cameras), inaccuracy can also originate from the software, and in particular the algorithm. As an actor involved in digital agriculture in New Zealand stated: 'you've to get the science and the experience packaged up in a way which works in the software' (Rijswijk et al., 2019: 7). Indeed, there is always a certain degree of subjectivity behind algorithms. The theories or assumptions upon which algorithms are based can change over time, and findings from different studies have to be weighed up when composing an algorithm. In precision farming, as Meijering (2016: 34-35) points out, '[s]ite specific advice is complex, because the varying parameters often work against each other.' Despite these challenges, and the related uncertainty, algorithmic advice in agriculture mostly shows strong correlations, lending it a strong veneer of accuracy. These seemingly precise outcomes are a result of the high volume of big data.
However, the volume of the data does not say much about its usefulness for the algorithmic task at hand. Aside from direct errors in measurement, the sensors that collect the data mostly also suffer limitations in the completeness (Shi et al., 2013) of the data gathered ('sampling'). An automated soil sensor might conduct measurements every minute, providing an impressively large data volume and completeness along the temporal dimension (at least from the point the sensors were installed onwards). However, as noted earlier, soil sensors are typically located at only one or a few points in a field. As a result, the completeness of data along the spatial (or positional) dimension (ibid.) is rather limited, even if this may be camouflaged by mapping techniques like kriging that interpolate from a few positions to create detailed maps. In particular, when soil variability is greater than assumed, the limited coverage of the spatial dimension will introduce inaccuracy.
Biases concern not only the concrete data from the field for which the algorithm generates advice, but also the wider data from the fields of numerous farms on which the algorithm has been trained. Such biases are even more difficult to detect, as the farmer has no knowledge about those fields. Furthermore, these biases easily go unnoticed, as such a big data-set, by virtue of its size, will produce extremely small variances which are 'essentially zero' (McFarland and McFarland, 2015). This leads to extremely statistically significant results which give 'a false sense of precision' (ibid.). Erickson et al. (2013: 26, cited in Miles, 2019) state that an algorithm constitutes 'a finite, well-defined set of rules to be applied unambiguously in specified settings'. The code, and the cases on which the algorithm was trained, determine in which specific settings it can accurately be applied. However, an algorithm rarely comes with guidelines clearly indicating its limitations. Consequently, farmers, as well as social scientists studying digital agriculture, can hardly judge an algorithm before use, and instead have to study 'algorithms in the wild' (Burrell, 2016: 2), as 'we can only know how algorithms make a difference to everyday life by observing their work in the world under different conditions' (Kitchin, 2017: 26). Agriculture's 'different conditions' show a bewildering variety, due to enormous spatial differences (such as local agro-climatic conditions) as well as impactful temporal changes. We first take a look at the latter.
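The statistical mechanism behind this 'false sense of precision' can be made concrete: as sample size grows, the standard error of an estimate collapses, so even a practically negligible difference becomes 'extremely statistically significant'. The numbers below are illustrative only.

```python
# Why huge data-sets produce 'precisely inaccurate' results: the
# standard error shrinks with sqrt(n), so a negligible effect that is
# invisible at n=100 becomes 'significant' at n=1,000,000.
# All numbers are illustrative.
import math

def z_score(mean_diff, sd, n):
    """Two-sample z-statistic for a difference in means (equal n, sd)."""
    return mean_diff / (sd * math.sqrt(2.0 / n))

diff = 0.5   # practically negligible yield difference, kg/ha
sd = 100.0   # within-group standard deviation, kg/ha

z_small = z_score(diff, sd, 100)      # ~0.04: clearly insignificant
z_big = z_score(diff, sd, 1_000_000)  # ~3.5: 'significant' at p < 0.001
```

The effect itself (0.5 kg/ha against a standard deviation of 100) is agronomically meaningless in both cases; only its statistical badge of significance changes with data volume.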
One limitation of the data recently generated in digital farming is that it rarely goes back further than a few years. The popular view of big data features a strong focus on velocity (one of big data's three Vs). The (near) real-time generation of data obviously offers clear advantages, as it facilitates a rapid response. However, the predominant focus on the present can hamper reflection and the use of longer-term, historical data series. This tends to have negative effects in agriculture. Most farmers know that one should 'never' take 'any notice of a one year's experiment' (Tsouvalis et al., 2000: 917), and yield maps are known to vary strongly between years (Heijting et al., 2011). In a study that compared Dutch farmers' own experiential knowledge of field variation with data generated through precision farming techniques, it appeared that some statistically insignificant correlations were seen as meaningful by farmers (Heijting et al., 2011). Like the data users studied by Dudhwala and Larsen (2019: 11), farmers often experience 'technological dissonance' when algorithmic output is at odds with their own circumstantial knowledge. Heijting et al. (2011: 504) convincingly argue that such knowledge may well be more accurate because 'information gathered throughout years of practice will be integrated and level out the effects of specific conditions in a certain year'. Farmers remembered, for instance, what in-field variation looked like in extreme weather circumstances years ago. Regarding temporality, data accuracy is thus not just about how closely data approximates the ideal of real-time generation, but also about its 'completeness' (Shi et al., 2013) over longer periods. The dynamically changing context, with countless changes in crop stages and weather alterations through the year, poses further difficulties in generating accurate algorithmic advice.
An algorithm trained to detect weeds amongst sugar beets, based on images of the crops early in the season, already became less accurate just a week later in the growing season (Tholhuijsen, 2019: A22). With sharp weather changes the errors would likely have been even bigger.
With more training on expanding sets of data, an algorithm's accuracy is likely to improve, but the challenge of achieving the desired accuracy remains huge: 'Variation in crop varieties, growth stages and soil has a great impact on what the crop looks like, and then we have not even mentioned the influence of the weather, such as dry and wet years. A whole lot of examples have to be collected before the system has sufficiently learned. For the recognition of potato "weed" in sugar beets this may still be feasible. But if we want to detect various crops and multiple weeds, this becomes a bigger challenge. Especially in the sprouting stage, when weeds can be controlled best' (Tholhuijsen, 2019: A22).
The article's conclusion that '[i]n fact, one has to have a couple of examples for all situations one can encounter in the field', seems an unattainable ideal, especially as climate change and the environmental changes it triggers introduce a whole layer of additional unpredictability. Aside from more unpredictable weather, this includes for instance the spread of novel weeds, insects, other animals and diseases. Introduction of new crop varieties in response to climate change and constantly changing regulations around agriculture, add further dimensions of complexity and dynamism.
In short, the bewildering spatial and temporal variety of settings in which algorithms have to operate in agriculture tends to affect their accuracy negatively. But who, or what, is to blame for the resulting inaccuracies: the technology itself, or the farmer who has incorrectly applied it?

Attributing inaccuracy: who/what is to blame?
We found that the widespread assumption of superb accuracy and veracity of digital technologies also means that, when mistakes occur in digital agriculture, there is a strong tendency to attribute them to the farmer. Errors are frequently assumed to originate from the farmer, who is seen as insufficiently tech-savvy, or as ignoring algorithmic advice (e.g. Saiz-Rubio and Rovira-Más, 2020). The president of CEMA, the European association of AgTech manufacturers, for instance, called the farmer 'one of the weakest components' of digital agriculture (Markwell, 2016). Even amongst some farmers the idea prevails, as voiced by a US farmer, that the data-driven 'combine will tell the truth'; he added: 'When you go through your machine, you know it's right' (Miles, 2019: 4).
While part of the explanation for the disappointing results of digital agriculture certainly originates from the side of the farmer, it is not helpful to ignore or downplay the role of the technology's own limitations in suboptimal outcomes. Obstacles on the side of farmers already receive abundant attention in the social science literature (Higgins et al., 2017; Ingram and Maye, 2020: 2; Saiz-Rubio and Rovira-Más, 2020: 17), whereas inaccuracy originating in the technologies is more or less taken as a given, and has therefore hardly been the subject of comprehensive study in the (social science) literature on digital farming.
A Canadian precision farming consultant voiced in an interview that the algorithms currently used to translate farm data into results and advice are 'still in their infancy' (McIntosh, 2020: 44). 8 Similarly, a leading Dutch scientist in the field of implementation of precision farming concludes that the idea that precision farming technologies are ready for implementation on the farm, with off-the-shelf solutions, has turned out to be an illusion: 'One thing has become clear: it is all not going as rapidly as expected' (Tholhuijsen, 2020: A12). 9 He identifies artificial intelligence as a weak link within digital agriculture: 'We observe that the techniques are not yet finalized. And that things are going too slowly. Weed detection for instance. The development is still too sluggish. It makes progress with difficulty. (…) The development of AI (…) is difficult. (…) Collecting images, annotating, labeling what it [the plant] is, that is very difficult' (Tholhuijsen, 2020: A14).

The 'precision trap': when unquestioned accuracy becomes a risk
Having discussed the wide range of inaccuracies above, this section and the next one move to the potential repercussions they could have, centered around the concepts of a 'precision trap' and a 'precision divide'. As these technologies are still in development, the repercussions will only become clear over time, when farmers start to rely more heavily on them. Therefore, the discussion below is necessarily tentative.
However, we argue that the above-discussed issues of inaccuracy are likely to become especially relevant, and potentially risky, when the (mis)functioning of digital technologies is not sufficiently corroborated with other knowledge. Such corroboration can be based on farmers' in-field observations and experiential knowledge, or on checks via alternative scientific knowledge, such as modelling and verification by independent experts (academics or scientists at regulatory agencies) or citizen science (conducted, for instance, by tech-savvy farmers or data analysts at environmental NGOs).
The widespread belief in the precision and accuracy of precision farming, as well as the associated tendency to blame farmers for inaccuracies, hampers an open attitude to (in)accuracy and might obstruct putting in place sufficient checks and balances based on non-digital data and knowledge. We contend that there is the danger of a 'precision trap': a strong belief in the overall precision of digital technologies, inspired by some seemingly precise output generated by big data, that hinders identifying (in time) the inaccuracies that may arise. 10 Farming operations may unknowingly enter a 'precision trap' based on three factors, discussed below: opacity, remoteness and forecasting.

Opacity and (in)accuracy
The issue of transparency (versus opacity) of big data and algorithms in agrifood studies and practices has received little attention beyond the issues of traceability and transparent ownership and user rights of data (Carbonell, 2016;Jakku et al., 2019: 1, 5).
Within the literature on big data, and particularly on the politics of algorithms beyond agriculture, there is growing attention to the opacity of algorithms (Burrell, 2016; Diakopoulos, 2013). In agriculture, data scientists and AgTech actors tend to see the opacity of algorithms as unproblematic. A Dutch scientist involved in the design of self-learning algorithms for agriculture, for example, stated: 'Which features the algorithm precisely takes into account is unknown; that is the great black box. In fact, it also doesn't matter' (Tholhuijsen, 2019: A21).
In one of the hackathons Visser was involved in, a similar stance among the data scientists contrasted markedly with the view of the social scientists involved. The latter thought that obtaining insight into which factors the algorithm prioritized was key to assessing the accuracy of the algorithm. The data scientists, however, asserted that it was best to leave this fully to artificial intelligence, with the algorithm, through self-learning, dynamically deciding which factors to prioritise.
Besides this intrinsic opacity of algorithms (Burrell, 2016), caused by the complex code discussed above, intentional opacity (ibid.) also features. John Deere tractors are infamous for the way they seal off both the software and the hardware (Carbonell, 2016), making it difficult or even impossible for farmers to repair their tractors and to gauge what data is collected and how it is used.

Forecasting and (in)accuracy
Risks related to inaccuracies are likely to increase substantially when big data is employed to make predictions about the future. Google Flu Trends is an algorithm that was often cited as exemplary of the great predictive potential of big data. However, bigger data do not generally lead to better forecasts, certainly not without appropriate theories and/or models (Hosni and Vulpiani, 2018). While initially delivering good forecasts of flu trends, Google Flu Trends later on predicted more than twice the number of flu-related doctor's visits that actually occurred, as a detailed study using surveillance reports from laboratories across the US showed, partly due to continuous changes in the configuration of the underlying search algorithm (Lazer et al., 2014).
In agriculture, with the uncertain influence of weather on future production, the promise of more pro-active decision-making based on forecasts has allure. Models predicting, for instance, pests before they are visible on the crops would help farmers to spray pesticides at just the right moment, thereby preventing the pest from emerging while using less pesticide (Saiz-Rubio and Rovira-Más, 2020: 9).
If the model is incorrect, however, there are unnecessary financial and environmental costs. A farmer who starts to make decisions based on forecasting models thus becomes heavily dependent on the reliability of the underlying models, as it might not (yet) be possible to corroborate the automated advice with in-field, human observations. An item in the Dutch farm weekly Boerderij about preventive spraying of weeds based on sensors and predictive algorithms (Meijering, 2016: A57) also noted that a '[d]isadvantage is that one does not really have a view on the effect'.
8 The quote is from the article based on an interview with the scientist, but is not with certainty a literal statement by the latter.
9 See previous note.
10 Our notion of a 'precision trap' is inspired by the 'accuracy trap' posited by Rankin (2020: 28), but is different. The 'precision trap' refers to the tendency to overlook actual inaccuracies (or attribute them to other factors, such as the farmers), due to the preconception that digital agriculture is precise. The 'accuracy trap' refers, on a more abstract level, to accuracy being treated as an 'unquestioned epistemic value', which can be a trap as it 'narrows our view and our values' and leaves little room for 'lived experience and other bottom-up counter knowledges' (Rankin, 2020: 28).

Remoteness and (in)accuracy
The growing distance of the farmer from what is going on in the field, due to the growing size of farms and to farmers spending more time in the office managing (part of) the farm processes from their computer and phone, reduces the possibilities for daily human observation, leaving less room for checks on data-driven operations. This distance to the field is even more pronounced among the growing sub-group of transnational farmers, whether globally engaged family farmers (Cheshire and Woods, 2013) or corporate and financialized transnational farms (Kuns et al., 2016).
The Dutch farmers Visser spoke to widely expressed the view that digital technologies will only support and/or enhance decision-making by farmers, rather than replace them. They felt that being in the field (even if somewhat less frequently) will continue to be general practice (cf. Velden, 2019), and that farmer knowledge based on, for instance, the history of a field and in-field visual inspection (including the use of other senses, such as smell, to detect changes in soil health; Lerink and Klompe, 2016: 12) will remain essential. 11 However, with the ongoing increase in farm size, farmers tend to have less time to spend in each field. The increased scale and distance between farmer and field carry risks, as inaccuracies are less likely to be detected by a farmer who only infrequently passes by a certain field or sensor. Furthermore, in larger farms local observations by farm workers might not reach the farmer who makes the decisions. In large, transnational farm companies this problem is even more pronounced (Kuns et al., 2016).

The 'precision divide'
The above-discussed inaccuracies of digital agriculture, and the related risks, are likely to become increasingly unevenly distributed. Various studies have noted a digital divide, with smaller farms facing major obstacles to adopting digital technologies due to, for instance, the high costs of the technologies and the difficulty of employing or sourcing consultancy on digital farming (Bronson, 2019; Carbonell, 2016; Rotz et al., 2019; Schimpf and Diamond, 2020).
We argue that besides a digital divide, a 'precision divide' is emerging, with digital technologies providing unequal benefits in terms of accuracy to different types of farms, even if they manage to adopt digital technologies (and thus overcome the digital divide). The 'precision divide' might be strengthened by the digital divide, which mostly refers to unequal access to, and use of, the internet (or digital technologies more generally), for instance when farmers with weak internet connectivity receive incomplete data, leading to inaccuracies. Overall, however, the precision divide is more directly a result of what is called the 'algorithmic divide' (Yu, 2020), namely unequal benefits from algorithms and AI. In addition, the accuracy of farm technologies with a clear hardware component, such as sensors, also plays a role.
The precision divide emerges from the fact that both hardware and software (algorithms) are primarily developed for a select set of farms, mostly large-scale, commodity crop farms focused on a few staple crops (Bronson, 2019), while different forms of agriculture might require different sensors, and divergent sets of indicators to be collected and analysed. Especially the fact that technologies are primarily designed for a few selected staple crops (like wheat, corn and sunflower) creates a 'precision divide', namely between farms focused on staple crops (mostly also larger and more industrial) and those cultivating other crops like potatoes, sugar beets, vegetables, or flowers.
Yield sensors developed for staple crops like grain and corn are less accurate in vegetable farming, for instance because weight measurements are distorted by soil residues, or because mud clogs the sensor (Velden, 2019: 15). As a Dutch farmer observed: 'We also started using yield monitoring on our onion loader, but the data was not accurate. And the machine also did not really fit into our context. You notice that yield monitoring in our crops is a disaster; you cannot measure that' (ibid.: 15).
Further, digital software and platforms, and their designers, also predominantly target a rather limited set of the most widely grown staple crops (Bronson, 2019). One of the first smart farming services, Field Scripts (provided by Monsanto), for instance, worked on a two-year crop history to provide advice to farmers on how and when to apply inputs. This might work for mono-cropping of crops such as corn, grain or soya, although industry representatives suggested that at least five years of yield, fertilizer and weather data are needed to make analytics products useful and robust (Keogh and Henry, 2016: 13). For crops with strict crop-rotation requirements such as potatoes, which are normally grown only once every three to four years on the same plot, a two- or five-year data record for the same field would be insufficient, and lead to less accurate advice. The weak applicability of general algorithms and crop models to non-staple crops might not only lead to less precision; it might even lead to completely wrong advice. A Dutch farmer stated the following about the applicability of general software to lily cultivation: 'Normally a dense and tall crop with a high NDVI [a metric for biomass indicating crop growth, based on images collected by satellites or drones] is more prone to fungal infections and a reason for applying a relatively high dose of fungicides. With lilies, however, short and dense plants stay wet below, whereas taller plants, even if they have already discoloured somewhat yellow, stay drier below and in the end are also healthier. Precisely the reverse of staple crops. This made it difficult to draw the right conclusions and to make the right task maps. In addition, no appropriate algorithms for Botrytis [a fungus which is a common threat to lilies] are available' (Koerhuis, 2020a: 24).
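The NDVI the farmer refers to is a standard index computed per pixel from red and near-infrared (NIR) reflectance: dense, healthy vegetation reflects far more NIR than red light. A minimal sketch (with illustrative reflectance values) shows why a tall, dense staple crop scores high while bare soil scores near zero; as the lily example illustrates, what that score *means* agronomically is crop-specific.

```python
# Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red),
# ranging from -1 to 1. Reflectance values below are illustrative.

def ndvi(nir, red):
    """Compute NDVI from near-infrared and red reflectance (0..1)."""
    return (nir - red) / (nir + red)

dense_crop = ndvi(nir=0.50, red=0.08)  # high: vigorous green canopy
bare_soil = ndvi(nir=0.25, red=0.20)   # near zero: little vegetation
```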
More holistic farm practices, such as intercropping, the use of complex rotation schemes, permaculture or integrated crop and livestock farming, are likely to pose even greater challenges (cf. Velden, 2019: 27; Bronson, 2019). Consequently, there is a risk of self-reinforcing processes of selectivity, concentration, and accuracy around a narrow set of crops (and farm styles). Digital agriculture platforms providing farming advice, such as Climate Corporation, focus their efforts on a select number of widely planted crops. As more farmers use these selected platforms, the algorithms are fed with expanding big data-sets, which enables more precise modelling and outcomes, just as the Google search algorithm gets better and better (compared to other search engines) as more internet users use Google.
As a result, farmers cultivating the few select crops predominantly targeted by the platforms can reap most of the benefits from advances in digital agriculture, as the sensors and algorithms for their crops and farm styles are relatively accurate. There is consequently the risk of a self-reinforcing precision divide between, on the one hand, highly digitalised farms centred on an increasingly narrow set of crops, with increasingly sophisticated technologies at their service, and, on the other, farms cultivating other crops, which have to get by with technologies ill adapted to their operations (and consequently less accuracy) and experience little benefit from ongoing digitalisation.

Conclusions
This article critically examined the promise of superb precision and accuracy that widely features in accounts of digital agriculture. It showed that weak GPS reception, sensor errors, as well as spatially or temporally incomplete data and self-learning algorithms that are still early in their learning process, can all cause inaccuracies that are easily hidden by a high volume of data and sophisticated, high-granularity maps. 11 As a result, digital agriculture is often 'precisely inaccurate' (McFarland and McFarland, 2015), giving a false sense of accuracy.
11 Cf. Rijswijk et al. (2019: 9) for a similar view by farm advisors in New Zealand.
In the AgTech sector, the inadequate or inaccurate functioning of technologies, if acknowledged at all, is portrayed as a transitory problem, as technologies will constantly become more powerful and precise. This view can also be encountered in the social science literature, where inaccuracies are labelled, for instance, as 'teething problems' (Jakku et al., 2019: 6). We do not contest that technologies will advance further. However, assuming that they are on a steady and steep curve towards ever more hyper-accuracy is problematic in several ways.
First, there is the danger that research will attribute blame for inaccuracy predominantly to the farmers who have to learn how to adopt, use, or 'digi-grasp' (Dufva and Dufva, 2018;Rijswijk et al., 2019: 11) digital technologies. AgTech representatives, but also academic studies, tend to highlight the farmers' role in mis-adoption and 'operator errors'. This reflects a de-contextualised view of technologies as abstract solutions, with implementation and use problems detached from them, and relegated to the 'human factor'. This article, on the contrary, appreciates that an algorithm, or digital farming technology at large, cannot be grasped without examining 'its wide socio-technical assemblage and its use' (Kitchin, 2017: 23). Our examination of this assemblage and use in digital agriculture highlights farmers' important role in preventing errors and shaping digital technologies' accuracy, when calibrating technologies or corroborating algorithmic advice.
Second, to assume that inaccuracies of digital technologies are just a temporal blip hampers scrutinizing the more fundamental tensions and limits connected to digital agricultural technologies. If big data and algorithms indeed turn our world into a 'black box society' with hidden algorithms controlling ever more aspects of our lives, as critical scholars of algorithms like Pasquale (2015) assert, then we cannot afford to leave these technologies unexamined.
Third, even when the code of algorithms improves and future field trials with digital technologies show enhanced accuracy, this might not always translate into increased accuracy in diverse everyday farming operations. Following Burrell (2016) and Kitchin (2017), we contend that algorithms and their effects can only be grasped 'in the wild'. Consequently, the highly diverse and dynamic natural context in which they have to operate becomes just as important in determining accuracy as the code itself (and the other socio-technical aspects). It appears that the enormous variety of plant growth stages and 'normal' weather variation currently poses considerable challenges to technologies, from sensors to artificial intelligence. Climate change will greatly increase this unpredictability, with more extreme weather, new plant diseases, and pests. This is likely to widen the implementation gaps (Sumberg, 2012) of new technologies. Rising unpredictability, and the subsequent lack of a stable, relevant baseline, will make fine-grained analysis of data a major challenge. This might mean that rapid technological advances result only in sluggish progress in terms of accurately measuring and influencing conditions in the field.

This article identified two major risks of inaccuracies in digital agriculture, which we termed the 'precision trap' and the 'precision divide'. The 'precision trap' refers to a strong belief in the overall precision of digital technologies, inspired by some seemingly precise output generated by big data, that hinders identifying (in time) the inaccuracies that may arise. Such a trap is likely under at least three conditions which are all likely to become more prevalent: increased opacity of digital technologies (particularly due to algorithmic opacity); increased remoteness of the farmers from the field; and a move from real-time measurement and advice to forecasting.
These conditions impede farmers' corroboration of the data and advice generated by digital technologies.
There is also the danger of an 'accuracy trap' (Rankin, 2020), namely a dismissal of all factors that cannot (yet) be measured with high accuracy. Farmers' preoccupation with the nitty-gritty of precisely measurable individual parameters and variable rate application might, for instance, distract them from deepening a holistic understanding of key determinants of cultivation, such as the soil, and of environmental effects that are more difficult to measure. A group of Dutch farmers, for instance, wondered: 'Does precision farming's attention to the square meter not distract from what is much more important, namely improvement of the soil?' (Tholhuijsen, 2020: A14).
The 'precision divide' refers to the unequally distributed precision benefits resulting from the growing 'algorithmic divide' (Yu, 2020) between farmers focusing on staple crops (mostly commercial and more industrial farms) and farmers cultivating other crops. Digital innovation focuses primarily on the former, who consequently benefit from more advanced algorithms and relatively precise or 'good enough' data. Other farmers, meanwhile, have to make do with less advanced or less applicable algorithms for their crops, which can be 'a disaster', as one farmer stated. Furthermore, the precision divide is likely to deepen and become multi-layered within the group of farmers producing the staple crops targeted by digital tools as well: higher-paying 'prime' options will likely provide some users with more accurate and fine-grained advice, while others might need to content themselves with 'basic' algorithmic advice. Earlier research on digital agriculture has identified a digital divide, namely unequal obstacles to accessing and using digital technologies. The 'precision divide' goes beyond that by pointing out that even farmers who have managed to access and use the new technologies may still experience major disadvantages depending on their crops and their ability to pay for services. These farmers find an unequal share of digital farming's inaccuracies on their 'plate'.
Overall, turning digital farming into genuine precision farming requires substantial work from the farmer to tweak sensors, check yield maps, and corroborate algorithmic advice. For farmers with the right (staple) crops, that may well result in 'good enough' data and quite precise operations. However, especially for farmers whose crops fall outside the gaze of the providers of digital technologies, digital farming might feel more like 'imprecision farming'.