path: root/src
Age | Commit message | Author | Lines
12 days | backend/lint: Do not shadow certain builtins | Wolfgang Müller | -39/+39
This commit enables ruff's flake8-builtins linter, which emits warnings when builtin names are shadowed. This is useful for builtins like "dict", "list", or "str", which we use often. Given the nature of this program we historically rely a lot on "id", "hash", and "filter" as variable names, which also shadow Python builtins. For now let's ignore those: we never call those builtins in our code, so the shadowing is harmless, and renaming every occurrence would have a considerable impact on the codebase. This might be revisited in the future.
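As a rough sketch of the kind of rename this linter drives (the function and its names here are hypothetical, not code from this repository):

    # Before: "dict" and "list" shadow the builtins of the same name.
    # def serialize(dict, list): ...

    # After: descriptive names that leave the builtins untouched.
    def serialize(mapping: dict, items: list) -> str:
        return ",".join([*mapping, *map(str, items)])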
12 days | backend/api: Remove superfluous scalar definition | Wolfgang Müller | -5/+0
Sadly we couldn't find out what this was meant to do, but we believe it was added in error. All tests pass with this removed, so we can drop it.
12 days | backend/plugins: Have E-Hentai raise an error from status_code early | Wolfgang Müller | -19/+19
This makes the code clearer and saves a whole indentation level.
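The pattern looks roughly like this sketch using the requests library (the function and its names are assumptions, not the plugin's actual code):

    import requests

    def fetch_metadata(url: str, payload: dict) -> dict:
        response = requests.post(url, json=payload, timeout=10)
        # Raise on a bad status code right away instead of nesting the
        # happy path inside an "if response.ok:" branch one level deeper.
        response.raise_for_status()
        return response.json()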
12 days | backend/plugins: Catch E-Hentai errors only for relevant lines | Wolfgang Müller | -11/+11
12 days | backend/plugins: Throw error if E-Hentai response is missing 'gmetadata' | Wolfgang Müller | -0/+2
We might get a valid (maybe empty) response from the API that lacks the 'gmetadata' field, in which case we do not want to simply crash on the missing key. Instead, throw a proper ScrapeError for it.
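A minimal sketch of the guard (ScrapeError stands in for the project's actual exception type; the parsing function is hypothetical):

    class ScrapeError(Exception):
        """Stand-in for the project's scraper error type."""

    def parse_gallery(data: dict) -> dict:
        # A well-formed (possibly empty) API response may still lack the
        # field we need; report that instead of crashing on a KeyError.
        if "gmetadata" not in data:
            raise ScrapeError("E-Hentai response is missing 'gmetadata'")
        return data["gmetadata"][0]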
12 days | backend/plugins: Remove stray apostrophe | Wolfgang Müller | -1/+1
12 days | backend/plugins: Have exhentai assume no censorship for non-h | Wolfgang Müller | -0/+3
Non-H usually has nothing to censor, so this should be a safe default. We have not come across anything where this would have been a false positive.
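Sketched, the default could look like this (the category value and return convention are assumptions, not the plugin's actual code):

    def parse_censorship(gallery: dict) -> bool | None:
        # Non-H usually has nothing to censor, so default to uncensored.
        if gallery.get("category") == "Non-H":
            return False
        return None  # unknown; leave the field unset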
12 days | backend/plugins: Use language parser from scraper utils | Wolfgang Müller | -32/+5
Now that we have this in our utility suite, we can make use of it in the built-in scraper plugins. This increases coverage and removes a lot of duplicate code.
12 days | backend/scraper: Have collect() ignore None results | Wolfgang Müller | -3/+6
If a parser function returns None we currently yield it regardless, even though it won't have any effect further down the line. Instead, clean up the collect() stream as early as possible.
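A simplified sketch of the idea (collect() here is a stand-in, not the project's actual implementation):

    from collections.abc import Callable, Iterable, Iterator

    def collect(parsers: Iterable[Callable[[str], object]], raw: str) -> Iterator[object]:
        for parser in parsers:
            result = parser(raw)
            # Drop empty results here instead of yielding None values
            # that every consumer downstream would have to ignore.
            if result is not None:
                yield result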
12 days | backend/tests: Add test for open_archive_file | Wolfgang Müller | -1/+1
12 days | backend/plugins: Use "no cover" pragma for consistency | Wolfgang Müller | -1/+1
13 days | backend/plugins: Fix MangaDex scraper title formatting | Wolfgang Müller | -1/+1
13 days | backend/scraper: Add parser methods for Language | Wolfgang Müller | -0/+32
We can expect a number of scraper sources to give languages either as ISO 639-3 codes or as their English names, so it makes sense to implement a simple parser method on our side.
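A sketch of such a parser (the Language enum and its members are hypothetical stand-ins for what the scraper utils define):

    import enum

    class Language(enum.Enum):
        # Hypothetical members keyed by ISO 639-3 code.
        ENGLISH = "eng"
        JAPANESE = "jpn"

    def parse_language(value: str) -> Language | None:
        value = value.strip().lower()
        for language in Language:
            # Accept either the ISO 639-3 code or the English name.
            if value in (language.value, language.name.lower()):
                return language
        return None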
13 days | backend/lint: Ignore B027 in api/inputs.py | Wolfgang Müller | -1/+1
Even though our base class here is abstract, this method is not, so we can ignore B027 [1].

[1] https://docs.astral.sh/ruff/rules/empty-method-without-abstract-decorator/
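The situation B027 flags, sketched with hypothetical names (the rule is ignored for the whole file in the lint configuration rather than per line):

    import abc

    class Input(abc.ABC):
        @abc.abstractmethod
        def parse(self, raw: str) -> object: ...

        def validate(self, raw: str) -> None:
            # Intentionally empty but not abstract: subclasses may
            # override this hook, and B027 flags it without an exception.
            pass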
13 days | backend/lint: Properly chain exceptions | Wolfgang Müller | -6/+6
This fixes flake8-bugbear's B904 [1].

[1] https://docs.astral.sh/ruff/rules/raise-without-from-inside-except/
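In the generic case, B904 asks for re-raising with "from" inside an "except" block so the original traceback is chained (ScrapeError and the function here are illustrative stand-ins):

    class ScrapeError(Exception):
        """Stand-in for the project's scraper error type."""

    def parse_count(raw: str) -> int:
        try:
            return int(raw)
        except ValueError as err:
            # "from err" chains the original exception instead of hiding it.
            raise ScrapeError(f"invalid count: {raw!r}") from err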
13 days | backend/plugins: Have anchira scraper use parse_dict from scraper utils | Wolfgang Müller | -11/+2
This cuts down on code duplication and also fixes B023 [1].

[1] https://docs.astral.sh/ruff/rules/function-uses-loop-variable/#function-uses-loop-variable-b023
13 days | backend/lint: Stop using mutable objects as function argument defaults | Wolfgang Müller | -3/+12
See https://docs.astral.sh/ruff/rules/mutable-argument-default/
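The standard fix, as a generic sketch (the function is illustrative, not code from this repository): default to None and create the list inside the function body.

    # Before: the single default list is shared across *all* calls.
    # def add_tag(tag, tags=[]): ...

    def add_tag(tag: str, tags: list[str] | None = None) -> list[str]:
        if tags is None:
            tags = []  # a fresh list per call, not one shared default
        tags.append(tag)
        return tags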
13 days | backend/lint: Implement pyupgrade suggestions | Wolfgang Müller | -1/+1
13 days | backend/lint: Implement flake8-simplify suggestions | Wolfgang Müller | -19/+17
13 days | backend/scraper: Bind loop variables correctly | Wolfgang Müller | -2/+2
This was uncovered by bugbear, but did not seem to have tripped our test. Fix it anyway.
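The classic form of the bug and its fix, sketched generically: closures created in a loop all see the loop variable's final value unless it is bound per iteration.

    # Buggy: every callback would return "c", the final value of "name".
    # callbacks = [lambda: name for name in ("a", "b", "c")]

    # Fixed: bind the current value as a default argument.
    callbacks = [lambda name=name: name for name in ("a", "b", "c")]
    assert [cb() for cb in callbacks] == ["a", "b", "c"]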
13 days | backend/lint: Fix import formatting | Wolfgang Müller | -1/+2
2024-03-25 | backend: Report Archive size as float | Wolfgang Müller | -1/+1
GraphQL integers are 32-bit as per spec [1] [2]. Implementations may therefore error on large numbers. Since an archive's size can reasonably exceed this value, make sure to report it as a float instead.

[1] https://graphql.org/learn/schema/
[2] https://github.com/graphql/graphql-js/issues/292#issuecomment-186702763
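In numbers (the resolver is a hypothetical sketch, not tied to any particular GraphQL library):

    import os

    GRAPHQL_INT_MAX = 2**31 - 1  # 2,147,483,647 bytes, just under 2 GiB

    def resolve_size(path: str) -> float:
        # Archives can easily exceed GRAPHQL_INT_MAX bytes, so report the
        # size as a float, which is not subject to the 32-bit cap.
        return float(os.path.getsize(path))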