Nowadays, many applications are based on Deep Neural Networks (DNNs): autonomous vehicles, face detection, and even COVID-19 detectors. Hence, improving the performance of DNNs is critical to scaling such applications in the era of IoT and Big Data. Typically, DNN training has been carried out in 32-bit floating point, while the inference phase has been performed with reduced-width integers. Type III universal numbers (unums), also known as posits, are a new format for representing real numbers introduced by John L. Gustafson in 2017 to mitigate the problems inherent in the floating-point arithmetic defined by the IEEE 754 standard. This format promises dynamic ranges similar to those of floating point at half the bit width, so its impact on power consumption, memory latency, operation execution time, and more in any hardware system can be enormous. However, due to their novelty, posits are not yet standard, nor is there a system that runs them natively. This means they have to be emulated through software libraries, which entails a severe loss of performance as soon as the application reaches a certain complexity. In the specific case of DNNs, the few studies tackling training with posits have achieved positive results on small feedforward networks. However, scaling training to DNNs such as AlexNet, VGG, or ResNet remains a great challenge for the scientific community. The present project aims to develop a hardware accelerator based on RISC-V cores with native support for posits, together with the associated compilation software support, in order to perform the training of DNNs efficiently. Both the hardware and the software developed will be publicly licensed for the benefit of the scientific community.
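
A minimal sketch of posit decoding, assuming a posit<16,1> configuration (the function name decode_posit and the parameter choices are illustrative, not part of the project deliverables): a posit word is split into sign, regime, exponent, and fraction fields and scaled by useed = 2^(2^es), which is essentially the work a software emulation library must repeat for every operand, and which native hardware support would eliminate.

def decode_posit(bits: int, nbits: int = 16, es: int = 1) -> float:
    """Decode an nbits-wide posit word with es exponent bits into a float.

    Illustrative sketch only: real emulation libraries also implement
    rounding and arithmetic directly on the encodings.
    """
    mask = (1 << nbits) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (nbits - 1):
        return float("nan")                      # NaR ("not a real")

    sign = bits >> (nbits - 1)
    if sign:                                     # negatives are stored in two's complement
        bits = (-bits) & mask

    body = bits & ((1 << (nbits - 1)) - 1)       # drop the sign bit
    first = (body >> (nbits - 2)) & 1            # regime: run of identical leading bits
    run, i = 0, nbits - 2
    while i >= 0 and ((body >> i) & 1) == first:
        run += 1
        i -= 1
    k = run - 1 if first else -run               # regime value
    i -= 1                                       # skip the terminating regime bit

    exp = 0                                      # exponent: next es bits (0 if truncated)
    for _ in range(es):
        exp <<= 1
        if i >= 0:
            exp |= (body >> i) & 1
            i -= 1

    frac_bits = i + 1                            # fraction: remaining bits, implicit leading 1
    frac = body & ((1 << frac_bits) - 1) if frac_bits > 0 else 0
    mantissa = 1.0 + (frac / (1 << frac_bits) if frac_bits > 0 else 0.0)

    value = 2.0 ** (k * (1 << es) + exp) * mantissa   # useed**k * 2**exp * (1 + f)
    return -value if sign else value

For instance, under these assumptions decode_posit(0x4000) returns 1.0 and decode_posit(0x5000) returns 2.0.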