Computing in memory is not a new concept. Prototypes of in-memory computing were already being discussed as early as the 1990s, but hardware limitations at the time prevented further research. There is still no unified definition of the concept. GridGain describes in-memory computing as storing data in the memory of a distributed cluster, via a middleware layer, and processing it there in parallel. Techopedia holds that, as memory prices fall sharply and memory capacities grow, it becomes preferable to keep information in dedicated server memory rather than on slower storage disks, so that business users can quickly perform pattern recognition and timely analysis of big data; this is what is meant by in-memory computing. In-memory computing is not simply a matter of storing data in memory; it also requires purpose-built software systems and computing models. From this, the main characteristics of in-memory computing can be summarized as follows:
(1) The hardware provides large-capacity memory so that the data to be processed can be kept in memory as far as possible; the memory may be a single machine's memory or distributed memory, and in the single-machine case the memory must be large enough.
(2) A good programming model and programming interface are provided.
(3) It mainly targets data-intensive applications, with large data volumes and strict real-time processing requirements.
(4) Most systems support parallel processing of data.
To sum up, in-memory computing is a new data-centric parallel computing model: centered on big data, relying on advances in computer hardware and on a new software architecture, it uses major innovations in the architecture and programming models to load data into memory for processing and to avoid I/O operations as much as possible.
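To make the data-centric, I/O-avoiding model concrete, the following minimal Python sketch (not drawn from any system described above; the dataset size, workload, and worker count are illustrative assumptions) loads a dataset into memory once, partitions it, and processes the partitions in parallel, so that no disk I/O occurs during the computation itself.

```python
# Minimal sketch of in-memory parallel processing: the data is loaded once,
# partitioned in memory, and each partition is processed by a worker process.
import multiprocessing as mp


def partial_sum(chunk):
    # CPU-bound work applied to one in-memory partition
    return sum(x * x for x in chunk)


def in_memory_parallel_sum(values, workers=4):
    # Partition the in-memory dataset and process the partitions in parallel,
    # avoiding repeated disk I/O during the computation itself.
    chunk_size = (len(values) + workers - 1) // workers
    chunks = [values[i:i + chunk_size] for i in range(0, len(values), chunk_size)]
    with mp.Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))


if __name__ == "__main__":
    # Stand-in for a dataset that would be read from disk once and then kept in memory.
    data = list(range(1_000_000))
    print(in_memory_parallel_sum(data))
```

A disk-based design would instead reread or spill intermediate data between processing steps; the point of the sketch is only that once the working set fits in memory, the parallel computation can proceed without further I/O.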