Boffins interrogate AI model to make it reveal itself
Briefly

"A hyperparameter stealing attack followed by parameter extraction can create a high-fidelity substitute model with the extracted information to mimic the victim model," the researchers explain in their paper, "TPUXtract: An Exhaustive Hyperparameter Extraction Framework." This method allows attackers to create substitutes at significantly reduced training costs.
"Because we stole the architecture and layer details, we were able to recreate the high-level features of the AI," explained Aydin Aysu. "We then used that information to recreate the functional AI model, or a very close surrogate of that model." In other words, the extracted hyperparameters were enough to rebuild a model that closely reproduces the victim's behavior.
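The core idea, stripped of the hardware side-channel details, is that once the architecture is known, an attacker only needs query access to fit a substitute. A toy illustration of that substitute-fitting step, using a hypothetical linear "victim" rather than anything from the paper:

```python
import numpy as np

# Hypothetical victim: a linear model whose weights are secret to the attacker.
rng = np.random.default_rng(0)
W_victim = rng.normal(size=(4, 3))

def victim_predict(x):
    # The attacker can call this, but cannot read W_victim directly.
    return x @ W_victim

# Attack sketch: the extracted "architecture" here is just "linear, 4 -> 3".
# Query the victim on chosen inputs and fit a substitute by least squares.
X = rng.normal(size=(100, 4))
Y = victim_predict(X)
W_substitute, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The substitute now mimics the victim on fresh, unseen inputs.
X_test = rng.normal(size=(10, 4))
max_error = np.abs(X_test @ W_substitute - victim_predict(X_test)).max()
```

This is far simpler than attacking a real neural network, but it shows why leaking the architecture matters: it turns model stealing into a tractable fitting problem against the victim's query interface.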
Read at The Register