ZHANG, Renjun, ZHANG, Tianming, CAI, Zinuo, LI, Dongmei, MA, Ruhui and BUYYA, Rajkumar, 2025. Memoria Nova: Optimizing Memory-Aware Model Inference for Edge Computing. ACM Transactions on Architecture and Code Optimization. 19 March 2025. Vol. 22, no. 1, p. 1-25. DOI 10.1145/3701997.
Elsevier - Harvard (with titles): Zhang, R., Zhang, T., Cai, Z., Li, D., Ma, R., Buyya, R., 2025. Memoria Nova: Optimizing Memory-Aware Model Inference for Edge Computing. ACM Transactions on Architecture and Code Optimization 22, 1-25. https://doi.org/10.1145/3701997
American Psychological Association 7th edition: Zhang, R., Zhang, T., Cai, Z., Li, D., Ma, R., & Buyya, R. (2025). Memoria Nova: Optimizing Memory-Aware Model Inference for Edge Computing. ACM Transactions on Architecture and Code Optimization, 22(1), 1-25. https://doi.org/10.1145/3701997
Springer - Basic (author-date): Zhang R, Zhang T, Cai Z, Li D, Ma R, Buyya R (2025) Memoria Nova: Optimizing Memory-Aware Model Inference for Edge Computing. ACM Transactions on Architecture and Code Optimization 22:1-25. https://doi.org/10.1145/3701997
Juristische Zitierweise (Stüber) (German legal style): Zhang, Renjun/ Zhang, Tianming/ Cai, Zinuo/ Li, Dongmei/ Ma, Ruhui/ Buyya, Rajkumar, Memoria Nova: Optimizing Memory-Aware Model Inference for Edge Computing, ACM Transactions on Architecture and Code Optimization 2025, 1-25.