Refactor SCRAM code to dynamically handle hash type and key length
From | Michael Paquier |
---|---|
Subject | Refactor SCRAM code to dynamically handle hash type and key length |
Date | |
Msg-id | Y5k3Qiweo/1g9CG6@paquier.xyz |
Responses |
Re: Refactor SCRAM code to dynamically handle hash type and key length
(Peter Eisentraut <peter.eisentraut@enterprisedb.com>)
|
List | pgsql-hackers |
Hi all,
(Adding Daniel and Jonathan per recent threads)

While investigating what it would take to extend SCRAM to use new hash methods (say, like the RFC draft for SCRAM-SHA-512), I was quickly reminded of the limitations created by SCRAM_KEY_LEN, which is the key length that we use in the HMAC and hash computations when creating a SCRAM verifier or when doing a SASL exchange.

Back in v10, when SCRAM was implemented, we kept the implementation simple and took the decision to rely heavily on buffers with a static size of SCRAM_KEY_LEN during the exchanges. This was a good choice back then, because we did not really have a way to handle errors and there was no need to worry about things like OOMs or even SHA computation errors. It was also incorrect in its own way, because we failed to go through OpenSSL for the hash/HMAC computations, which would be an issue for FIPS certification. This led to the introduction of the new cryptohash and HMAC APIs, and the SCRAM code has been changed so that it is able to know about, and pass through its layers, any errors (OOM, hash/MAC computation, OpenSSL issues), as of 87ae969 and e6bdfd9.

Taking advantage of all the error stack and logic introduced previously, it becomes rather straightforward to remove the hardcoded assumptions behind SHA-256 in the SCRAM code path. Attached is a patch that does so:
- SCRAM_KEY_LEN is removed from all the internal SCRAM routines; it is replaced by a logic where the hash type and the key length are stored in fe_scram_state for the frontend and scram_state for the backend.
- The frontend assigns the hash type and the key length depending on its choice of SASL mechanism in scram_init()@fe-auth-scram.c.
- The backend assigns the hash type and the key length based on the parsed SCRAM entry from pg_authid, which works nicely when we need to handle a raw password for a SASL exchange, for example. That's basically what we do now, but scram_state is just changed to work through it.
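To illustrate the shape of the refactoring, here is a minimal standalone sketch of the idea (all names here, such as scram_state_sketch and scram_init_sketch, are illustrative assumptions and not the actual patch code): the exchange state carries the hash type and key length chosen at initialization time, and key buffers are sized from that, instead of fixed SCRAM_KEY_LEN arrays.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical hash-type enum, standing in for pg_cryptohash_type. */
typedef enum
{
	SKETCH_SHA256,
	SKETCH_SHA512
} sketch_hash_type;

/* Sketch of an exchange state: the hash type and key length are fields
 * set once at init time, rather than compile-time constants. */
typedef struct
{
	sketch_hash_type hash_type;	/* chosen from the SASL mechanism */
	size_t		key_length;		/* 32 for SHA-256, 64 for SHA-512 */
	uint8_t    *stored_key;		/* allocated to key_length */
	uint8_t    *server_key;		/* allocated to key_length */
} scram_state_sketch;

/* Frontend-style initialization: derive hash type and key length from
 * the selected SASL mechanism name.  Returns 0 on success, -1 on
 * unknown mechanism or OOM. */
static int
scram_init_sketch(scram_state_sketch *state, const char *mechanism)
{
	if (strcmp(mechanism, "SCRAM-SHA-256") == 0)
	{
		state->hash_type = SKETCH_SHA256;
		state->key_length = 32;
	}
	else if (strcmp(mechanism, "SCRAM-SHA-512") == 0)
	{
		state->hash_type = SKETCH_SHA512;
		state->key_length = 64;
	}
	else
		return -1;				/* unsupported mechanism */

	state->stored_key = calloc(1, state->key_length);
	state->server_key = calloc(1, state->key_length);
	if (state->stored_key == NULL || state->server_key == NULL)
		return -1;				/* OOM is reported, not ignored */
	return 0;
}
```

Because every later HMAC/hash call reads state->hash_type and state->key_length, adding a new mechanism in a design like this only touches the initialization step.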
We currently have 68 references to SCRAM_KEY_LEN on HEAD. This patch brings these references down to 6, which cannot really be avoided because we still need to handle SCRAM-SHA-256 one way or another:
- When parsing a SCRAM password from pg_authid (to get a password type or fill in scram_state in the backend).
- For the mock authentication, where SHA-256 is forced. We are going to fail anyway, so any hash would be fine as long as we just let the user know about the failure transparently.
- When initializing the exchange in libpq, based on the SASL mechanism choice.
- scram-common.h itself.
- When building a verifier in the be-fe interfaces. These could be changed as well, but I did not see a point in doing so yet.
- SCRAM_KEY_LEN is renamed to SCRAM_SHA_256_KEY_LEN to reflect its dependency on SHA-256.

With this patch in place, the internals of SCRAM for SASL are basically able to handle any hash method. There is much more to consider, like how we'd need to treat uaSCRAM (separate code for more hash methods, or use the same) or HBA entries, but this removes what I consider to be 70%~80% of the pain in terms of extensibility with the current code, and this is enough to be a submission on its own to move towards more methods. I am planning to tackle more things in terms of pluggability after what's done here, btw :)

This patch passes check-world and the CI is green. I have also tested the patch with SCRAM verifiers coming from a server initially on HEAD, so it looks pretty solid seen from here, being careful of memory leaks in the frontend, mainly.

Thoughts or comments?
--
Michael
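For the backend side, the idea of assigning the hash type from the parsed pg_authid entry can be sketched like this (the helper name parse_scram_prefix_sketch is hypothetical; the actual parsing lives in the SCRAM verifier-parsing code, and the on-disk format is "SCRAM-SHA-256$<iterations>:<salt>$<StoredKey>:<ServerKey>"):

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical helper: derive hash type (0 = SHA-256) and key length
 * from the mechanism prefix of a stored SCRAM secret.  Today only
 * SCRAM-SHA-256 exists; a future SCRAM-SHA-512 branch would slot in
 * alongside it. */
static int
parse_scram_prefix_sketch(const char *secret,
						  int *hash_type, size_t *key_length)
{
	if (strncmp(secret, "SCRAM-SHA-256$", strlen("SCRAM-SHA-256$")) == 0)
	{
		*hash_type = 0;			/* SHA-256 */
		*key_length = 32;		/* SHA-256 digest size in bytes */
		return 0;
	}
	/* Not a recognized SCRAM secret. */
	return -1;
}
```

This mirrors the point about the remaining 6 references: the mechanism name must be recognized somewhere, and that is where the per-hash knowledge naturally concentrates once the rest of the code is generic.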
Attachments