We are accustomed to precise computing, in which designs must be verified as correct, implementations must be tested for correctness, programs must be exact, and algorithms must compute exact outputs. However, for many real problems an approximate solution is sufficient; in fact, many of the interesting problems we want to solve with computers today need only approximate answers. Insisting on needless precision in such cases wastes power and energy on computations that are, in the end, useless. For example, a few incorrect pixels in a video stream are likely to go unnoticed by the viewer. Likewise, machine learning problems typically tolerate some inaccuracy as long as the answer is close enough.

Approximate computing is a radical paradigm shift in energy-efficient systems design and operation, based on the idea that we are hindering the efficiency of computer systems by demanding too much accuracy from them. Perfect answers are unnecessary in a large number of application domains, such as DSP, statistics, and machine learning. In such error-resilient applications, approximate computing aggressively decreases energy dissipation by relaxing the correctness of the performed computations.

The number of smart, network-enabled devices has grown exponentially and has already surpassed the world's population. These connected devices spend most of their time collecting sensor data from their physical surroundings. Such data is naturally prone to noise and analog variation, much of which is imperceptible to humans; handling it with the same rigour as passwords and bank accounts would be wasteful. At the same time, it is impossible to represent all numbers in a finite digital system, so some precision must inevitably be sacrificed. This has become all the more relevant as modern systems increasingly support non-standard bit-widths and arbitrary fixed-point, floating-point, and even stochastic numerical representations. The fundamental challenge is to match the precision requirements of applications with the resources available in hardware.

Approximate computing opens fundamentally new research avenues and requires vertical optimization across the system stack. In classical computing, improving efficiency still requires 100-percent correct execution; under approximate computing, some level of error is tolerated in exchange for larger gains in efficiency. This paradigm has led to a flurry of research activity over the past few years. In hardware design, approximate computing mainly targets arithmetic circuits such as adders and multipliers. Because hardware multipliers are complex circuits, disciplined approximation in their design can deliver significant gains in application performance and energy consumption. Consequently, research activity in the design of approximate multipliers has increased markedly in recent years.
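To make this concrete, the sketch below models one of the simplest approximation strategies from the literature, operand truncation; it is an illustration of the general idea, not the hybrid design proposed in this project. Zeroing the k least significant bits of each operand removes the corresponding partial products in an array multiplier, trading a bounded, one-sided error for savings in area and power. A minimal Python model:

    def approx_mul(a: int, b: int, k: int = 4) -> int:
        # Zero the k least significant bits of each unsigned operand
        # before multiplying. In an array multiplier this removes the
        # corresponding partial-product terms; the result never exceeds
        # the exact product, so the error is one-sided.
        mask = ~((1 << k) - 1)
        return (a & mask) * (b & mask)

    # Quick check for two 8-bit operands: exact vs. approximate product.
    a, b = 183, 97
    print(a * b, approx_mul(a, b, k=4))

Software models such as this one allow the accuracy of a candidate design to be evaluated long before committing to an FPGA or CMOS implementation.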
GOALS OF THE PROJECT AND EXPECTED RESULTS

In the proposed project, we will investigate and answer the following questions:
1. How can we keep up with incoming sensor data in the presence of frequent power losses?
2. How can we design cheaper hardware multipliers that still maintain high accuracy?
3. How can we design variable-precision multipliers for machine-learning accelerators?
4. How do state-of-the-art algorithms in image processing and computer vision adapt to the error introduced by approximate computing?

The expected results are as follows. We will explore the current state-of-the-art design space for multipliers and introduce a new, efficient, and easily applied approximate multiplier based on a hybrid design strategy. We anticipate that the proposed multiplier will compare favourably with state-of-the-art approximate multipliers in terms of hardware cost, energy efficiency, and accuracy: we expect up to a 30-40% reduction in area and up to a 50% reduction in power consumption, with negligible computational error.

Complementarity of the groups:
1. Researchers from the University of Ljubljana have extensive experience with sensor networks, embedded systems, hardware implementation of algorithms on FPGAs, approximate computer arithmetic, and heterogeneous computing systems (CPU/GPU/FPGA).
2. Researchers from the University of Banja Luka have extensive experience in digital signal and image processing, sensor networks, and machine learning.
The research groups from UNI BL and UL FRI have been cooperating successfully for years and have published a number of papers on approximate arithmetic, image classification, and sensor networks.

Planned contributions of the researchers from SLO: The research group at FRI will work on the energy-efficient implementation of the multiplication algorithms in CMOS circuits. The group will carefully analyse the chip's power and resource usage and evaluate the proposed arithmetic in terms of area and power efficiency.

Planned contributions of the researchers from BiH: Researchers from the Laboratory for Digital Signal Processing, ETF, Banja Luka, will survey state-of-the-art algorithms in image processing and computer vision and identify candidates in which approximate computing can be applied. They will also assess the impact of the error introduced by approximate arithmetic on state-of-the-art algorithms for image processing, image recognition, and image classification.
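As a starting point for that assessment, the accuracy of an approximate multiplier is commonly summarised with operand-level metrics such as the mean relative error distance (MRED) before its effect on full image-processing pipelines is measured. The sketch below estimates MRED by Monte-Carlo sampling; the multiplier under test is the hypothetical truncation model from the earlier example, standing in for any candidate design:

    import random

    def approx_mul(a, b, k=4):
        # Truncation-based model from the earlier sketch, used here as a
        # stand-in for any candidate approximate multiplier.
        mask = ~((1 << k) - 1)
        return (a & mask) * (b & mask)

    def mred(mul, bits=8, trials=100_000, seed=0):
        # Mean relative error distance: the average of
        # |approximate - exact| / exact over uniformly sampled
        # non-zero operand pairs.
        rng = random.Random(seed)
        hi = (1 << bits) - 1
        total = 0.0
        for _ in range(trials):
            a, b = rng.randint(1, hi), rng.randint(1, hi)
            exact = a * b
            total += abs(mul(a, b) - exact) / exact
        return total / trials

    for k in (2, 4, 6):
        score = mred(lambda a, b: approx_mul(a, b, k))
        print(f"k={k}: MRED = {score:.4f}")

Metrics such as MRED give a quick, application-independent ranking of candidate designs; the application-level studies planned above then determine whether a given error level is actually negligible for image recognition and classification.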