Project Suki: Enhancements and fixes (#19323)

* build(gradle)!: Migrate ProjectSuki build.gradle to kotlin dsl
* feat: Add PathPattern
* feat: Add DataExtractor
* feat: Add ProjectSukiAPI
* feat: Add ProjectSukiFilters
* refactor!: migrate to new API and cleanup extension
  Completely replace NormalizedURL with HttpUrl; remove PS.kt, PSBook.kt and PSFilters.kt
* chore(naming): rename pattern properties to be consistent
* refactor(preferences): Centralize and cleanup preferences
* chore(preferences): remove Android Preference import
* refactor(everything): Fix most of everything
  The apk now builds and correctly fetches books, chapters and images, including thumbnails.
* revert(gradle): revert to build.gradle.kts to be consistent with other extensions, as context receivers are still unusable
* feat(url-activity): enhance
  Needs to be tested, got distracted
* feat(preferences): Enhance preferences by providing more robust constructs
* feat(filters): Update and enhance filters
* feat(site-api): add search request data, request and response parse
* refactor: replace require and error with reportErrorToUser in PathPattern
* refactor(core): Enhance everything
  The extension now shows browse results on Popular and the main page on Latest, and defaults to an actually-useful search (with a naive option on older devices) while still allowing the old search. Enhances user interaction by capturing or preventing almost all errors and alerting the user on what went wrong and what to do.
* chore: Suppress warnings
* docs: Document everything
  Add documentation and revise pretty much everything.
* docs: Add CHANGELOG.md
* docs: Add README.md
* refactor(search-mode): Combine Naive/Full Site/Strict search options into a single filter
* revert(manifest): Remove android:icon
  It's set in the core AndroidManifest.xml
* chore(lang): switch extension language to "all"
  Explicitly set id: 8965918600406781666
* fix(preferences): fix blacklisted languages id
  It was the same as whitelisted
* fix: Fix bugs and more
  Change Naive to Simple, provide a more understandable description, and make it possible to use Simple mode on any Android version if one wishes to do so. Provide a better regex for Simple search. Test chapter filtering, download (single and multiple chapters), all searches, and chapter view.
* docs: Update README and CHANGELOG
* refactor(url-activity): Refactor Url Activity from kotlin to java
  The process kept complaining about java.lang.ClassNotFoundException: kotlin.jvm.internal.Intrinsics
* revert(url-activity): Avoid kotlin Intrinsics
parent 3265a2a7c8
commit d161dafd17
@ -2,33 +2,32 @@
<manifest xmlns:android="http://schemas.android.com/apk/res/android">

    <application>
        <!-- The Activity that will handle intents with URLs pertaining to ProjectSuki -->
        <activity
            android:name=".all.projectsuki.ProjectSukiUrlActivity"
            android:name=".all.projectsuki.ProjectSukiSearchUrlActivity"
            android:excludeFromRecents="true"
            android:exported="true"
            android:theme="@android:style/Theme.NoDisplay">
            <intent-filter>
                <!-- see ACTION_DEFAULT/ACTION_VIEW -->
                <action android:name="android.intent.action.VIEW" />

                <!-- see CATEGORY_DEFAULT -->
                <category android:name="android.intent.category.DEFAULT" />
                <!-- see CATEGORY_BROWSABLE -->
                <category android:name="android.intent.category.BROWSABLE" />

                <data
                    android:host="projectsuki.com"
                    android:pathPattern="/book/..*"
                    android:scheme="https" />
            </intent-filter>
            <!-- see https://developer.android.com/guide/topics/manifest/data-element -->
            <!-- we're not that strict -->
            <data android:scheme="http" />
            <data android:scheme="https" />

            <intent-filter>
                <action android:name="android.intent.action.VIEW" />
                <!-- We only care about http(s) urls from projectsuki.com -->
                <data android:host="projectsuki.com" />

                <category android:name="android.intent.category.DEFAULT" />
                <category android:name="android.intent.category.BROWSABLE" />

                <data
                    android:host="projectsuki.com"
                    android:pathPattern="/read/..*"
                    android:scheme="https" />
                <!-- Difference between ".*" and "..*" -->
                <!-- https://stackoverflow.com/a/43396490 -->
                <data android:pathPattern="/search.*" />
            </intent-filter>
        </activity>
    </application>
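The manifest comments above point at the difference between `.*` and `..*` in `android:pathPattern`: in Android's glob syntax, `.` matches any single character and `X*` matches zero or more of the preceding character, so `/book/.*` also matches the bare `/book/` path, while `/book/..*` requires at least one character after the slash. A minimal sketch of the distinction, using plain Kotlin regexes as a stand-in for Android's `PatternMatcher` glob (the two happen to agree for these two patterns):

```kotlin
fun main() {
    // "/book/.*" — '.' is any char, '*' is zero-or-more: matches the bare path too
    val zeroOrMore = Regex("/book/.*")
    // "/book/..*" — one mandatory char, then zero-or-more: needs an actual book id
    val atLeastOne = Regex("/book/..*")

    check(zeroOrMore.matches("/book/"))      // bare path slips through
    check(!atLeastOne.matches("/book/"))     // rejected: nothing after /book/
    check(atLeastOne.matches("/book/12345")) // accepted: has a book id
}
```

This is why the manifest uses `/book/..*` and `/read/..*` for the book and reader paths: a URL with an empty id would otherwise trigger the activity.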
@ -0,0 +1,20 @@
## Version 1.4.2

- Improved search feature
- New and improved Popular tab
- Old Popular tab moved to Latest
- Fixed chapter numbering issues when "Chapter" wasn't explicitly present (e.g. "Ch. 2")
- Added chapter number inference for when the above fails
- Improved user feedback for errors and issues
- Fixed wording and clarity on most descriptions
- Added simple search option for Android API < 24
- Chapter language will now appear to the right of the scan group
- Enhanced chapters sorting (number > group > language)
- Changed extension language from English to Multi

## Version 1.4.1

First version of the extension:

- basic functionality
- basic search, limited to full-site
@ -0,0 +1,9 @@
# Project Suki

Go check out our general FAQs and Guides over at
[Extension FAQ](https://tachiyomi.org/help/faq/#extensions) or
[Getting Started](https://tachiyomi.org/help/guides/getting-started/#installation).

If you still don't find the answer you're looking for, you're welcome to open an
[issue](https://github.com/tachiyomiorg/tachiyomi-extensions/issues)
and mention [me](https://github.com/npgx/) *in the issue*.
@ -1,15 +1,16 @@
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlinx-serialization'

ext {
    extName = 'Project Suki'
    pkgNameSuffix = 'all.projectsuki'
    extClass = '.ProjectSuki'
    extVersionCode = 1
    extVersionCode = 2
}

dependencies {
    implementation(project(":lib-randomua"))
    implementation project(":lib-randomua")
}

apply from: "$rootDir/common.gradle"
@ -0,0 +1,902 @@
package eu.kanade.tachiyomi.extension.all.projectsuki

import okhttp3.HttpUrl
import okhttp3.HttpUrl.Companion.toHttpUrl
import okhttp3.HttpUrl.Companion.toHttpUrlOrNull
import org.jsoup.nodes.Document
import org.jsoup.nodes.Element
import org.jsoup.select.Elements
import java.text.SimpleDateFormat
import java.util.Calendar
import java.util.Date
import java.util.EnumMap
import java.util.Locale
import java.util.TimeZone

/**
 * @see EXTENSION_INFO Found in ProjectSuki.kt
 */
@Suppress("unused")
private inline val INFO: Nothing get() = error("INFO")

internal typealias BookID = String
internal typealias ChapterID = String
internal typealias ScanGroup = String

/**
 * Gets the thumbnail image URL for a particular [bookID], with an [extension] if needed and an optional [size].
 *
 * URLs produced by this function might not point to a valid asset.
 */
internal fun bookThumbnailUrl(bookID: BookID, extension: String, size: UInt? = null): HttpUrl {
    return homepageUrl.newBuilder()
        .addPathSegment("images")
        .addPathSegment("gallery")
        .addPathSegment(bookID)
        .addPathSegment(
            when {
                size == null && extension.isBlank() -> "thumb"
                size == null -> "thumb.$extension"
                extension.isBlank() -> "$size-thumb"
                else -> "$size-thumb.$extension"
            },
        )
        .build()
}

/**
 * Finds the closest common parent between 2 or more [elements].
 *
 * If all [elements] are the same element, the result is that element's direct parent.
 *
 * Returns null if the [elements] are not in the same [Document].
 */
internal fun commonParent(vararg elements: Element): Element? {
    require(elements.size > 1) { "elements must have more than 1 element" }

    val parents: List<Iterator<Element>> = elements.map { it.parents().reversed().iterator() }
    var lastCommon: Element? = null

    while (true) {
        val layer: MutableSet<Element?> = parents.mapTo(HashSet()) {
            if (it.hasNext()) it.next() else null
        }
        if (null in layer) break
        if (layer.size != 1) break
        lastCommon = layer.single()
    }

    return lastCommon
}

/**
 * Simple utility class that represents a switching point between 2 patterns given by a certain predicate (see [switchingPoints]).
 *
 * For example, in the sequence 111001 there are 2 switching points:
 * the first one is 10, at indexes 2 and 3,
 * and the second one is 01, at indexes 4 and 5.
 *
 * Both indexes and states are given for absolute clarity.
 */
internal data class SwitchingPoint(val left: Int, val right: Int, val leftState: Boolean, val rightState: Boolean) {
    init {
        if (left + 1 != right) {
            reportErrorToUser {
                "invalid SwitchingPoint: ($left, $right)"
            }
        }
        if (leftState == rightState) {
            reportErrorToUser {
                "invalid SwitchingPoint: ($leftState, $rightState)"
            }
        }
    }
}

/**
 * Returns all [SwitchingPoint]s in a certain sequence.
 */
internal fun <E> Iterable<E>.switchingPoints(predicate: (E) -> Boolean): List<SwitchingPoint> {
    val iterator = iterator()
    if (!iterator.hasNext()) return emptyList()

    val points: MutableList<SwitchingPoint> = ArrayList()
    var state: Boolean = predicate(iterator.next())
    var index = 1
    for (element in iterator) {
        val p = predicate(element)
        if (state != p) {
            points.add(SwitchingPoint(left = index - 1, right = index, leftState = state, rightState = p))
            state = p
        }
        index++
    }

    return points
}
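As an aside, the 111001 example from the KDoc can be checked directly. The snippet below reproduces `SwitchingPoint` and `switchingPoints` from this file so it stands alone (with the `reportErrorToUser` validation dropped for brevity):

```kotlin
data class SwitchingPoint(val left: Int, val right: Int, val leftState: Boolean, val rightState: Boolean)

fun <E> Iterable<E>.switchingPoints(predicate: (E) -> Boolean): List<SwitchingPoint> {
    val iterator = iterator()
    if (!iterator.hasNext()) return emptyList()

    val points = ArrayList<SwitchingPoint>()
    var state = predicate(iterator.next())
    var index = 1
    for (element in iterator) {
        val p = predicate(element)
        if (state != p) {
            // the state flipped between index-1 and index: record the boundary
            points.add(SwitchingPoint(index - 1, index, state, p))
            state = p
        }
        index++
    }
    return points
}

fun main() {
    // "111001": 1 -> 0 at indexes (2, 3), then 0 -> 1 at indexes (4, 5)
    val points = "111001".toList().switchingPoints { it == '1' }
    check(
        points == listOf(
            SwitchingPoint(2, 3, leftState = true, rightState = false),
            SwitchingPoint(4, 5, leftState = false, rightState = true),
        ),
    )
}
```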

/**
 * Utility class that can extract and format data from a certain [extractionElement].
 *
 * Note that a [Document] is also an [Element].
 *
 * The given [extractionElement] must have an [ownerDocument][Element.ownerDocument] with a valid absolute
 * [location][Document.location] (according to [toHttpUrl]).
 *
 * [Lazy] properties are used to allow the extraction process to happen only once
 * (and for thread safety, see [LazyThreadSafetyMode], [lazy]).
 *
 * @author Federico d'Alonzo <me@npgx.dev>
 */
@Suppress("MemberVisibilityCanBePrivate")
class DataExtractor(val extractionElement: Element) {

    private val url: HttpUrl = extractionElement.ownerDocument()?.location()?.toHttpUrlOrNull() ?: reportErrorToUser {
        buildString {
            append("DataExtractor class requires a \"from\" element ")
            append("that possesses an owner document with a valid absolute location(), but ")
            append(extractionElement.ownerDocument()?.location())
            append(" was found!")
        }
    }

    /**
     * All [anchor](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/a) tags
     * that have a valid url in the [href](https://developer.mozilla.org/en-US/docs/Web/SVG/Attribute/href)
     * [attribute](https://developer.mozilla.org/en-US/docs/Glossary/Attribute).
     *
     * To understand the [Element.select] methods, see [CSS selectors](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_selectors)
     * and how to use them [to select DOM elements](https://developer.mozilla.org/en-US/docs/Web/API/Document_object_model/Locating_DOM_elements_using_selectors).
     *
     * JSoup's [Element.attr] method supports the special `abs:<attribute>` syntax when working with relative URLs.
     * It is simply a shortcut to [Element.absUrl], which uses [Document.baseUri].
     */
    val allHrefAnchors: Map<Element, HttpUrl> by lazy {
        buildMap {
            extractionElement.select("a[href]").forEach { a ->
                val href = a.attr("abs:href")
                if (href.isNotBlank()) {
                    href.toHttpUrlOrNull()
                        ?.let { this[a] = it }
                }
            }
        }
    }

    /**
     * Filters [allHrefAnchors] for urls that satisfy `url.host.endsWith(homepageUrl.host)`,
     * meaning this property contains only elements that redirect to a Project Suki URL.
     */
    val psHrefAnchors: Map<Element, HttpUrl> by lazy {
        allHrefAnchors.filterValues { url ->
            url.host.endsWith(homepageUrl.host)
        }
    }

    /** Utility class that represents a "book" element, identified by the [bookID]. */
    data class PSBook(val thumbnail: HttpUrl, val rawTitle: String, val bookUrl: HttpUrl, val bookID: BookID) {
        override fun equals(other: Any?) = other is PSBook && this.bookID == other.bookID
        override fun hashCode() = bookID.hashCode()
    }

    /**
     * This property contains all the [books][PSBook] contained in the [extractionElement].
     *
     * Extraction is done by first obtaining all [psHrefAnchors], then using some heuristics
     * to find the [PSBook.rawTitle] and the [PSBook.thumbnail]'s extension.
     *
     * Other extensions might use CSS Selectors (see [DataExtractor]) to find these values in a fixed structure.
     * But because [Project Suki](https://projectsuki.com) seems to be done by hand using [Bootstrap](https://getbootstrap.com/),
     * it has a much more volatile structure.
     *
     * To make this extension maintainable, data extraction is done by finding all elements in the page that redirect to
     * book entries, and using generalized heuristics that should be robust to some types of changes.
     * This has the disadvantage of making it a nightmare to distinguish between the different elements in a single page,
     * but luckily we don't need to do that for the purposes of a Tachiyomi extension.
     */
    val books: Set<PSBook> by lazy {
        buildSet {
            data class BookUrlContainerElement(val container: Element, val href: HttpUrl, val matchResult: PathMatchResult)

            psHrefAnchors.entries
                .map { (element, href) -> BookUrlContainerElement(element, href, href.matchAgainst(bookUrlPattern)) }
                .filter { it.matchResult.doesMatch }
                .groupBy { it.matchResult["bookid"]!!.value }
                .forEach { (bookID: BookID, containers: List<BookUrlContainerElement>) ->

                    val extension: String = containers.asSequence()
                        .flatMap { it.container.select("img") }
                        .mapNotNull { it.imageSrc() }
                        .map { it.matchAgainst(thumbnailUrlPattern) }
                        .filter { it.doesMatch }
                        .firstOrNull()
                        ?.get("thumbextension")
                        ?.value ?: ""

                    val title: String = containers.asSequence()
                        .map { it.container }
                        .filter { it.select("img").isEmpty() }
                        .filter { it.parents().none { p -> p.tag().normalName() == "small" } }
                        .map { it.ownText() }
                        .filter { !it.equals("show more", ignoreCase = true) }
                        .firstOrNull() ?: reportErrorToUser { "Could not determine title for $bookID" }

                    add(
                        PSBook(
                            thumbnail = bookThumbnailUrl(bookID, extension),
                            rawTitle = title,
                            bookUrl = homepageUrl.newBuilder()
                                .addPathSegment("book")
                                .addPathSegment(bookID)
                                .build(),
                            bookID = bookID,
                        ),
                    )
                }
        }
    }

    /** Utility class that extends [PSBook] by providing a [detailsTable], [alertData] and [description]. */
    data class PSBookDetails(
        val book: PSBook,
        val detailsTable: EnumMap<BookDetail, String>,
        val alertData: List<String>,
        val description: String,
    ) {
        override fun equals(other: Any?) = other is PSBookDetails && this.book == other.book
        override fun hashCode() = book.hashCode()
    }

    /**
     * Represents a plethora of possibly-present data about some book.
     *
     * The process for extracting the details is described in the KDoc for [bookDetails].
     */
    @Suppress("RegExpUnnecessaryNonCapturingGroup")
    enum class BookDetail(val display: String, val regex: Regex, val elementProcessor: (Element) -> String = { it.text() }) {
        ALT_TITLE("Alt titles:", """(?:alternative|alt\.?) titles?:?""".toRegex(RegexOption.IGNORE_CASE)),
        AUTHOR("Authors:", """authors?:?""".toRegex(RegexOption.IGNORE_CASE)),
        ARTIST("Artists:", """artists?:?""".toRegex(RegexOption.IGNORE_CASE)),
        STATUS("Status:", """status:?""".toRegex(RegexOption.IGNORE_CASE)),
        ORIGIN("Origin:", """origin:?""".toRegex(RegexOption.IGNORE_CASE)),
        RELEASE_YEAR("Release year:", """release(?: year):?""".toRegex(RegexOption.IGNORE_CASE)),
        USER_RATING(
            "User rating:",
            """user ratings?:?""".toRegex(RegexOption.IGNORE_CASE),
            elementProcessor = { ratings ->
                val rates = when {
                    ratings.id() != "ratings" -> 0
                    else -> ratings.children().count { it.hasClass("text-warning") }
                }

                when (rates) {
                    in 1..5 -> "$rates/5"
                    else -> "?/5"
                }
            },
        ),
        VIEWS("Views:", """views?:?""".toRegex(RegexOption.IGNORE_CASE)),
        OFFICIAL("Official:", """official:?""".toRegex(RegexOption.IGNORE_CASE)),
        PURCHASE("Purchase:", """purchase:?""".toRegex(RegexOption.IGNORE_CASE)),
        GENRE("Genres:", """genre(?:\(s\))?:?""".toRegex(RegexOption.IGNORE_CASE)),
        ;

        companion object {
            private val values = values().toList()
            fun from(type: String): BookDetail? = values.firstOrNull { it.regex.matches(type) }
        }
    }

    /** Used to detect visible/invisible alerts. */
    private val displayNoneRegex = """display: ?none;?""".toRegex(RegexOption.IGNORE_CASE)

    /**
     * All [details][PSBookDetails] are extracted from a table-like list of `<div>` elements,
     * found in the book main page, using generalized heuristics:
     *
     * First the algorithm looks for known entries in the "table" by looking for
     * the [Status][BookDetail.STATUS] and [Origin][BookDetail.ORIGIN] fields.
     * This is possible because these elements redirect to the [search](https://projectsuki.com/search)
     * page with "status" and "origin" queries.
     *
     * The [commonParent] between the two elements is found and the table is subsequently analyzed.
     * If this method fails, at least the [Author][BookDetail.AUTHOR], [Artist][BookDetail.ARTIST] and [Genre][BookDetail.GENRE]
     * details are found via URLs.
     *
     * An extra [Genre][BookDetail.GENRE] is added when possible:
     * - Origin: "kr" -> Genre: "Manhwa"
     * - Origin: "cn" -> Genre: "Manhua"
     * - Origin: "jp" -> Genre: "Manga"
     *
     * The book title, description and alerts are also found in similar ways.
     *
     * The description is expanded with all this information too.
     */
    val bookDetails: PSBookDetails by lazy {
        val match = url.matchAgainst(bookUrlPattern)
        if (!match.doesMatch) reportErrorToUser { "cannot extract book details: $url" }
        val bookID = match["bookid"]!!.value

        val authors: Map<Element, HttpUrl> = psHrefAnchors.filter { (_, url) ->
            url.queryParameterNames.contains("author")
        }

        val artists: Map<Element, HttpUrl> = psHrefAnchors.filter { (_, url) ->
            url.queryParameterNames.contains("artist")
        }

        val status: Map.Entry<Element, HttpUrl> = psHrefAnchors.entries.single { (_, url) ->
            url.queryParameterNames.contains("status")
        }

        val origin: Map.Entry<Element, HttpUrl> = psHrefAnchors.entries.single { (_, url) ->
            url.queryParameterNames.contains("origin")
        }

        val genres: Map<Element, HttpUrl> = psHrefAnchors.filter { (_, url) ->
            url.matchAgainst(genreSearchUrlPattern).doesMatch
        }

        val details = EnumMap<BookDetail, String>(BookDetail::class.java)
        val tableParent: Element? = commonParent(status.key, origin.key)
        val rows: List<Element>? = tableParent?.children()?.toList()

        for (row in (rows ?: emptyList())) {
            val cols = row.children()
            val typeElement = cols.getOrNull(0) ?: continue
            val valueElement = cols.getOrNull(1) ?: continue

            val typeText = typeElement.text()
            val detail = BookDetail.from(typeText) ?: continue

            details[detail] = detail.elementProcessor(valueElement)
        }

        details.getOrPut(BookDetail.AUTHOR) { authors.keys.joinToString(", ") { it.text() } }
        details.getOrPut(BookDetail.ARTIST) { artists.keys.joinToString(", ") { it.text() } }
        details.getOrPut(BookDetail.STATUS) { status.key.text() }
        details.getOrPut(BookDetail.ORIGIN) { origin.key.text() }

        details.getOrPut(BookDetail.GENRE) { genres.keys.joinToString(", ") { it.text() } }

        when (origin.value.queryParameter("origin")) {
            "kr" -> "Manhwa"
            "cn" -> "Manhua"
            "jp" -> "Manga"
            else -> null
        }?.let { originGenre ->
            details[BookDetail.GENRE] = """${details[BookDetail.GENRE]}, $originGenre"""
        }

        val title: Element? = extractionElement.selectFirst("h2[itemprop=title]") ?: extractionElement.selectFirst("h2") ?: run {
            // the common table is inside of a "row" wrapper that is the neighbour of the h2 containing the title
            // if we sort of generalize this, the title should be the first
            // text-node-bearing child of the table's grandparent
            tableParent?.parent()?.parent()?.children()?.firstOrNull { it.textNodes().isNotEmpty() }
        }

        val alerts: List<String> = extractionElement.select(".alert, .alert-info")
            .asSequence()
            .filter { !it.attr("style").contains(displayNoneRegex) }
            .filter { alert -> alert.parents().none { it.attr("style").contains(displayNoneRegex) } }
            .map { alert ->
                buildString {
                    var appendedSomething = false
                    alert.select("h4").singleOrNull()?.let {
                        appendLine(it.wholeText())
                        appendedSomething = true
                    }
                    alert.select("p").singleOrNull()?.let {
                        appendLine(it.wholeText())
                        appendedSomething = true
                    }
                    if (!appendedSomething) {
                        appendLine(alert.wholeText())
                    }
                }
            }
            .toList()

        val description = extractionElement.selectFirst("#descriptionCollapse")
            ?.wholeText() ?: extractionElement.select(".description")
            .joinToString("\n\n", postfix = "\n") { it.wholeText() }

        val extension = extractionElement.select("img")
            .asSequence()
            .mapNotNull { e -> e.imageSrc()?.let { e to it } }
            .map { (img, src) -> img to src.matchAgainst(thumbnailUrlPattern) }
            .filter { (_, match) -> match.doesMatch }
            .firstOrNull()
            ?.second
            ?.get("thumbextension")
            ?.value ?: ""

        PSBookDetails(
            book = PSBook(
                bookThumbnailUrl(bookID, extension),
                title?.text() ?: reportErrorToUser { "could not determine book title from details for $bookID" },
                url,
                bookID,
            ),
            detailsTable = details,
            alertData = alerts,
            description = description,
        )
    }

    /** Represents some data type that a certain column in the chapters table represents. */
    sealed class ChaptersTableColumnDataType(val required: Boolean) {

        /** @return true if this data type is represented by a column's raw title. */
        abstract fun isRepresentedBy(from: String): Boolean

        /** Represents the chapter's title, which also normally includes the chapter number. */
        /*data*/ object Chapter : ChaptersTableColumnDataType(required = true) {
            private val chapterHeaderRegex = """chapters?""".toRegex(RegexOption.IGNORE_CASE)
            override fun isRepresentedBy(from: String): Boolean = from.matches(chapterHeaderRegex)
        }

        /** Represents the chapter's scan group. */
        /*data*/ object Group : ChaptersTableColumnDataType(required = true) {
            private val groupHeaderRegex = """groups?""".toRegex(RegexOption.IGNORE_CASE)
            override fun isRepresentedBy(from: String): Boolean = from.matches(groupHeaderRegex)
        }

        /** Represents the chapter's release date (when it was added to the site). */
        /*data*/ object Added : ChaptersTableColumnDataType(required = true) {
            private val dateHeaderRegex = """added|date""".toRegex(RegexOption.IGNORE_CASE)
            override fun isRepresentedBy(from: String): Boolean = from.matches(dateHeaderRegex)
        }

        /** Represents the chapter's language. */
        /*data*/ object Language : ChaptersTableColumnDataType(required = false) {
            private val languageHeaderRegex = """language""".toRegex(RegexOption.IGNORE_CASE)
            override fun isRepresentedBy(from: String): Boolean = from.matches(languageHeaderRegex)
        }

        /** Represents the chapter's view count. */
        /*data*/ object Views : ChaptersTableColumnDataType(required = false) {
            @Suppress("RegExpUnnecessaryNonCapturingGroup")
            private val viewsHeaderRegex = """views?(?:\s*count)?""".toRegex(RegexOption.IGNORE_CASE)
            override fun isRepresentedBy(from: String): Boolean = from.matches(viewsHeaderRegex)
        }

        companion object {
            val all: Set<ChaptersTableColumnDataType> by lazy { setOf(Chapter, Group, Added, Language, Views) }
            val required: Set<ChaptersTableColumnDataType> by lazy { all.filterTo(LinkedHashSet()) { it.required } }

            /**
             * Takes the list of [headers] and returns a map that
             * represents which data type is contained in which column index.
             *
             * Not all column indexes might be present if some column isn't recognised as one of the data types listed above.
             */
            fun extractDataTypes(headers: List<Element>): Map<ChaptersTableColumnDataType, Int> {
                return buildMap {
                    headers.map { it.text() }
                        .forEachIndexed { columnIndex, columnHeaderText ->
                            all.forEach { dataType ->
                                if (dataType.isRepresentedBy(columnHeaderText)) {
                                    put(dataType, columnIndex)
                                }
                            }
                        }
                }
            }
        }
    }

    /** Represents a book's chapter. */
    data class BookChapter(
        val chapterUrl: HttpUrl,
        val chapterMatchResult: PathMatchResult,
        val chapterTitle: String,
        val chapterNumber: ChapterNumber?,
        val chapterGroup: ScanGroup,
        val chapterDateAdded: Date?,
        val chapterLanguage: String,
    ) {

        @Suppress("unused")
        val bookID: BookID = chapterMatchResult["bookid"]!!.value

        @Suppress("unused")
        val chapterID: ChapterID = chapterMatchResult["chapterid"]!!.value
    }

    /**
     * This property contains all the [BookChapter]s contained in the [extractionElement], grouped by [ScanGroup].
     *
     * The extraction proceeds by first finding all `<table>` elements and then progressively refining
     * the extracted data to remove false positives, combining all the extracted data and removing duplicates at the end.
     *
     * The `<thead>` element is analyzed to find the corresponding data types; this is resistant to shuffles
     * (e.g. if the Chapter and Language columns are swapped, this will work anyway).
     *
     * Then the `<tbody>` rows (`<tr>`) are processed one by one to find the ones that match the column (`<td>`)
     * size and data type positions that we care about.
     */
    val bookChapters: Map<ScanGroup, List<BookChapter>> by lazy {
        data class RawTable(val self: Element, val thead: Element, val tbody: Element)
        data class AnalyzedTable(val raw: RawTable, val columnDataTypes: Map<ChaptersTableColumnDataType, Int>, val dataRows: List<Elements>)

        val allChaptersByGroup: MutableMap<ScanGroup, MutableList<BookChapter>> = extractionElement.select("table")
            .asSequence()
            .mapNotNull { tableElement ->
                tableElement.selectFirst("thead")?.let { thead ->
                    tableElement.selectFirst("tbody")?.let { tbody ->
                        RawTable(tableElement, thead, tbody)
                    }
                }
            }
            .mapNotNull { rawTable ->
                val (_: Element, theadElement: Element, tbodyElement: Element) = rawTable

                val columnDataTypes: Map<ChaptersTableColumnDataType, Int> = theadElement.select("tr").asSequence()
                    .mapNotNull { headerRow ->
                        ChaptersTableColumnDataType.extractDataTypes(headers = headerRow.select("td"))
                            .takeIf { it.keys.containsAll(ChaptersTableColumnDataType.required) }
                    }
                    .firstOrNull() ?: return@mapNotNull null

                val dataRows: List<Elements> = tbodyElement.select("tr")
                    .asSequence()
                    .map { it.children() }
                    .filter { it.size == columnDataTypes.size }
                    .toList()

                AnalyzedTable(rawTable, columnDataTypes, dataRows)
            }
            .map { analyzedTable ->
                val (_: RawTable, columnDataTypes: Map<ChaptersTableColumnDataType, Int>, dataRows: List<Elements>) = analyzedTable

                val rawData: List<Map<ChaptersTableColumnDataType, Element>> = dataRows.map { row ->
                    columnDataTypes.mapValues { (_, columnIndex) ->
                        row[columnIndex]
                    }
                }

                val rawByGroup: Map<ScanGroup, List<Map<ChaptersTableColumnDataType, Element>>> = rawData.groupBy { data ->
                    data[ChaptersTableColumnDataType.Group]!!.text()
                }

                val chaptersByGroup: Map<ScanGroup, List<BookChapter>> = rawByGroup.mapValues { (groupName, chapters: List<Map<ChaptersTableColumnDataType, Element>>) ->
                    chapters.map { data: Map<ChaptersTableColumnDataType, Element> ->
                        val chapterElement: Element = data[ChaptersTableColumnDataType.Chapter]!!
                        val addedElement: Element = data[ChaptersTableColumnDataType.Added]!!
                        val languageElement: Element? = data[ChaptersTableColumnDataType.Language]
                        // val viewsElement = data[ChaptersTableColumnDataType.Views]

                        val chapterUrl: HttpUrl = (chapterElement.selectFirst("a[href]") ?: reportErrorToUser { "Could not determine chapter url for ${chapterElement.text()}" })
                            .attr("abs:href")
                            .toHttpUrl()
                        val chapterUrlMatch: PathMatchResult = chapterUrl.matchAgainst(chapterUrlPattern)

                        val chapterNumber: ChapterNumber? = chapterElement.text().tryAnalyzeChapterNumber()
                        val dateAdded: Date? = addedElement.text().tryAnalyzeChapterDate()
                        val chapterLanguage: String = languageElement?.text()?.trim()?.lowercase(Locale.US) ?: UNKNOWN_LANGUAGE

                        BookChapter(
                            chapterUrl = chapterUrl,
                            chapterMatchResult = chapterUrlMatch,
                            chapterTitle = chapterElement.text(),
                            chapterNumber = chapterNumber,
                            chapterGroup = groupName,
                            chapterDateAdded = dateAdded,
                            chapterLanguage = chapterLanguage,
                        )
                    }
                }

                chaptersByGroup
            }
            .map { chaptersByGroup ->
                chaptersByGroup.mapValues { (_, chapters) ->
                    chapters.tryInferMissingChapterNumbers()
                }
            }
            .fold(LinkedHashMap()) { map, next ->
                map.apply {
                    next.forEach { (group, chapters) ->
                        getOrPut(group) { ArrayList() }.addAll(chapters)
                    }
                }
            }

        allChaptersByGroup
    }
|
||||
|
||||
/**
|
||||
* Utility class that represents a chapter number.
|
||||
*
|
||||
* Ordering is implemented in the way a human would most likely expect chapters to be ordered,
|
||||
* e.g. chapter 10.15 comes after chapter 10.9
|
||||
*/
|
||||
data class ChapterNumber(val main: UInt, val sub: UInt) : Comparable<ChapterNumber> {
|
||||
override fun compareTo(other: ChapterNumber): Int = comparator.compare(this, other)
|
||||
|
||||
companion object {
|
||||
val comparator: Comparator<ChapterNumber> by lazy { compareBy({ it.main }, { it.sub }) }
|
||||
val chapterNumberRegex: Regex = """(?:chapter|ch\.?)\s*(\d+)(?:\s*[.,-]\s*(\d+)?)?""".toRegex(RegexOption.IGNORE_CASE)
|
||||
}
|
||||
}
|
||||
|
||||
/** Tries to infer the chapter number from the raw title. */
|
||||
private fun String.tryAnalyzeChapterNumber(): ChapterNumber? {
|
||||
return ChapterNumber.chapterNumberRegex
|
||||
.find(this)
|
||||
?.let { simpleMatchResult ->
|
||||
val main: UInt = simpleMatchResult.groupValues[1].toUInt()
|
||||
val sub: UInt = simpleMatchResult.groupValues[2].takeIf { it.isNotBlank() }?.toUInt() ?: 0u
|
||||
|
||||
ChapterNumber(main, sub)
|
||||
}
|
||||
}
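As a standalone illustration (not part of the diff), the interplay between `chapterNumberRegex` and `ChapterNumber` ordering can be sketched as below. `Num` and `parse` are hypothetical stand-ins that mirror the definitions above:

```kotlin
// Re-declarations for illustration only; these mirror chapterNumberRegex and
// ChapterNumber from the extension, under hypothetical names.
val regex = """(?:chapter|ch\.?)\s*(\d+)(?:\s*[.,-]\s*(\d+)?)?""".toRegex(RegexOption.IGNORE_CASE)

data class Num(val main: UInt, val sub: UInt) : Comparable<Num> {
    // Compare by main chapter first, then by sub-chapter, both numerically.
    override fun compareTo(other: Num): Int = compareValuesBy(this, other, { it.main }, { it.sub })
}

fun parse(title: String): Num? = regex.find(title)?.let { m ->
    val main = m.groupValues[1].toUInt()
    val sub = m.groupValues[2].takeIf { it.isNotBlank() }?.toUInt() ?: 0u
    Num(main, sub)
}

fun main() {
    // Numeric sub-chapter comparison: 10.15 sorts after 10.9 (unlike a lexicographic sort).
    println(parse("Chapter 10.15")!! > parse("Ch. 10.9")!!) // true
    // Titles without a recognizable marker yield null, which is what the
    // inference pass further down then tries to patch.
    println(parse("Oneshot")) // null
}
```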

/**
 * Represents an index where the chapter number is unknown, and
 * whether the previous (above, next numerical chapter)
 * or next (below, previous numerical chapter) chapter numbers
 * are known.
 *
 * Requires [aboveIsKnown] or [belowIsKnown] to be true (or both).
 */
data class MissingChapterNumberEdge(val index: Int, val aboveIsKnown: Boolean, val belowIsKnown: Boolean) {
    init {
        require(aboveIsKnown || belowIsKnown) { "previous or next index must be known (or both)" }
    }
}

/**
 * Chapter titles usually contain "Chapter xx" or "Ch. xx", but to provide some way to patch
 * eventual holes (which happened before with "Ch.", which wasn't accounted for), this method is provided.
 *
 * The algorithm tries to infer the chapter numbers by using correctly
 * inferred zones and expanding them.
 *
 * The theoretical behaviour of this algorithm can easily be represented by
 * using + for known and - for unknown chapter numbers
 * (think of a 1D cellular automaton with very simple rules).
 * An example (coarse) timeline could look like this:
 * ```
 * -++--++---+-+++--
 * ++++++++-+++++++-
 * +++++++++++++++++
 * ```
 * The actual changes always happen in a loop-like behaviour from left to right.
 * We can use this to our advantage.
 *
 * Inference is done on a best-guess basis using neighbouring values.
 * Reporting to the user is preferred over providing weird values.
 */
private fun List<BookChapter>.tryInferMissingChapterNumbers(): List<BookChapter> {
    if (isEmpty()) return emptyList()

    val switchingPoints: List<SwitchingPoint> = switchingPoints { it.chapterNumber != null }
    val missingChapterNumberEdges: ArrayDeque<MissingChapterNumberEdge> = ArrayDeque()

    when {
        switchingPoints.isEmpty() && first().chapterNumber == null -> {
            // oh dear, nothing is known
            reportErrorToUser { "No chapter numbers could be inferred!" }
        }

        switchingPoints.isEmpty() /* && first().chapterNumber != null */ -> {
            // all are known
            return this
        }
    }

    // convert switching points into an easier-to-handle format
    switchingPoints.forEach { (left, right, leftIsKnown, rightIsKnown) ->
        when {
            leftIsKnown && !rightIsKnown -> {
                // going from known to unknown in top-to-bottom direction.
                // chapters are listed in inverse order, so top is last, bottom is first:
                // left is top, right is bottom.
                // the subject of discussion is the right one (the unknown).
                // this is the simpler case because we're going from known numbers to unknown.
                missingChapterNumberEdges.add(MissingChapterNumberEdge(right, aboveIsKnown = true, belowIsKnown = false))
            }

            else -> {
                // SwitchingPoint's contract guarantees: leftIsKnown = false, rightIsKnown = true

                // we were in "unknown" territory and are going to known.
                // the subject of discussion is the left one (the unknown).
                // there is a special case in which the unknown chapter is the only one,
                // with known numbers in both directions.
                // we account for that by checking whether the last added member
                // of missingChapterNumberEdges (if any) has index equal to the "left" element
                // (the subject, unknown), in which case we replace it
                // with a bi-directional MissingChapterNumberEdge.
                val last: MissingChapterNumberEdge? = missingChapterNumberEdges.lastOrNull()
                when (last?.index == left) {
                    true -> {
                        // surrounded, replace
                        missingChapterNumberEdges[missingChapterNumberEdges.lastIndex] = MissingChapterNumberEdge(left, aboveIsKnown = true, belowIsKnown = true)
                    }

                    else -> {
                        // sequence of 2 or more unknowns
                        missingChapterNumberEdges.add(MissingChapterNumberEdge(left, aboveIsKnown = false, belowIsKnown = true))
                    }
                }
            }
        }
    }

    // previous chapter number
    fun ChapterNumber.predictBelow(): ChapterNumber = when (sub) {
        0u -> ChapterNumber(main - 1u, 0u) // before chapter 18, chapter 17
        5u -> ChapterNumber(main, 0u) // before chapter 18.5, chapter 18
        else -> ChapterNumber(main, sub - 1u) // before chapter 18.4, chapter 18.3
    }

    // next chapter number
    fun ChapterNumber.predictAbove(): ChapterNumber = when (sub) {
        0u, 5u -> ChapterNumber(main + 1u, 0u) // after chapter 17 or 17.5, chapter 18
        else -> ChapterNumber(main, sub + 1u) // after chapter 18.3, chapter 18.4
    }

    fun MissingChapterNumberEdge.indexAbove(): Int = index - 1
    fun MissingChapterNumberEdge.indexBelow(): Int = index + 1

    val result: MutableList<BookChapter> = ArrayList(this)
    while (missingChapterNumberEdges.isNotEmpty()) {
        val edge: MissingChapterNumberEdge = missingChapterNumberEdges.removeFirst()

        when {
            edge.aboveIsKnown && edge.belowIsKnown -> {
                // both are known
                val above: BookChapter = result[edge.indexAbove()]
                val below: BookChapter = result[edge.indexBelow()]

                val inferredByDecreasing = above.chapterNumber!!.predictBelow()
                val inferredByIncreasing = below.chapterNumber!!.predictAbove()

                when {
                    above.chapterNumber == below.chapterNumber -> {
                        reportErrorToUser { "Chapter number inference failed (case 0)!" }
                    }

                    above.chapterNumber < below.chapterNumber -> {
                        reportErrorToUser { "Chapter number inference failed (case 1)!" }
                    }

                    inferredByDecreasing == inferredByIncreasing -> {
                        // inference agrees from both sides
                        result[edge.index] = result[edge.index].copy(chapterNumber = inferredByDecreasing)
                    }

                    // might be handled by the branches above, just for safety
                    inferredByIncreasing >= above.chapterNumber || inferredByDecreasing <= below.chapterNumber -> {
                        reportErrorToUser { "Chapter number inference failed (case 2)!" }
                    }

                    inferredByDecreasing > inferredByIncreasing -> {
                        // gap between chapters, take the lowest
                        result[edge.index] = result[edge.index].copy(chapterNumber = inferredByIncreasing)
                    }

                    else -> {
                        // inferredByIncreasing > inferredByDecreasing should be handled by branch 2 above;
                        // everything else should be reported to the user
                        reportErrorToUser { "Chapter number inference failed (case 3)!" }
                    }
                }
            }

            edge.aboveIsKnown -> {
                // only above is known
                val above: BookChapter = result[edge.indexAbove()]
                val inferredByDecreasing = above.chapterNumber!!.predictBelow()

                // handle this one
                result[edge.index] = result[edge.index].copy(chapterNumber = inferredByDecreasing)

                // there are 2 main cases, where + is known, - is unknown, * just changed above and . is anything:
                // case 1: ..+*-+..
                // case 2: ..+*--..
                when (missingChapterNumberEdges.firstOrNull()?.index == edge.index + 1) {
                    true -> {
                        // replace next edge with surrounded
                        val removed = missingChapterNumberEdges.removeFirst()
                        missingChapterNumberEdges.addFirst(removed.copy(aboveIsKnown = true, belowIsKnown = false))
                    }

                    false -> {
                        // add new edge below current edge's index
                        missingChapterNumberEdges.addLast(MissingChapterNumberEdge(edge.indexBelow(), aboveIsKnown = true, belowIsKnown = false))
                    }
                }
            }

            edge.belowIsKnown -> {
                // only below is known
                val below: BookChapter = result[edge.indexBelow()]
                val inferredByIncreasing = below.chapterNumber!!.predictAbove()

                // handle this one
                result[edge.index] = result[edge.index].copy(chapterNumber = inferredByIncreasing)

                // there are 2 main cases (as above):
                // case 1: ..+-*+..
                // case 2: ..--*+..
                when (missingChapterNumberEdges.lastOrNull()?.index == edge.index - 1) {
                    true -> {
                        // replace last edge with surrounded
                        val removed = missingChapterNumberEdges.removeLast()
                        missingChapterNumberEdges.addLast(removed.copy(aboveIsKnown = true, belowIsKnown = true))
                    }

                    false -> {
                        // add new edge above current edge's index
                        missingChapterNumberEdges.addLast(MissingChapterNumberEdge(edge.indexAbove(), aboveIsKnown = false, belowIsKnown = true))
                    }
                }
            }

            else -> {
                // shouldn't be possible
                reportErrorToUser { "Chapter number inference failed (case 4)!" }
            }
        }
    }

    return result
}

/**
 * ThreadLocal [SimpleDateFormat] ([SimpleDateFormat] is not thread safe).
 */
private val absoluteDateFormat: ThreadLocal<SimpleDateFormat> = object : ThreadLocal<SimpleDateFormat>() {
    override fun initialValue() = runCatching { SimpleDateFormat("MMMM dd, yyyy", Locale.US) }.fold(
        onSuccess = { it },
        onFailure = { reportErrorToUser { "Invalid SimpleDateFormat(MMMM dd, yyyy)" } },
    )
}

private val relativeChapterDateRegex = """(\d+)\s+(years?|months?|weeks?|days?|hours?|mins?|minutes?|seconds?|sec)\s+ago""".toRegex(RegexOption.IGNORE_CASE)

/**
 * Tries to parse a possibly human-readable relative [Date].
 *
 * @see Calendar
 */
private fun String.tryAnalyzeChapterDate(): Date? {
    return when (val match = relativeChapterDateRegex.matchEntire(trim())) {
        null -> {
            absoluteDateFormat.get()
                .runCatching { this!!.parse(this@tryAnalyzeChapterDate) }
                .fold(
                    onSuccess = { it },
                    onFailure = { reportErrorToUser { "Could not parse date: $this" } },
                )
        }

        else -> {
            // relative
            val number: Int = match.groupValues[1].toInt()
            val relativity: String = match.groupValues[2]
            val cal: Calendar = Calendar.getInstance(TimeZone.getDefault(), Locale.US)

            with(relativity) {
                when {
                    startsWith("year") -> cal.add(Calendar.YEAR, -number)
                    startsWith("month") -> cal.add(Calendar.MONTH, -number)
                    startsWith("week") -> cal.add(Calendar.DAY_OF_MONTH, -number * 7)
                    startsWith("day") -> cal.add(Calendar.DAY_OF_MONTH, -number)
                    startsWith("hour") -> cal.add(Calendar.HOUR, -number)
                    startsWith("min") -> cal.add(Calendar.MINUTE, -number)
                    startsWith("sec") -> cal.add(Calendar.SECOND, -number)
                }
            }

            cal.time
        }
    }
}
}
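The relative branch of `tryAnalyzeChapterDate` above turns strings like "2 weeks ago" into `Calendar` arithmetic. A minimal self-contained sketch of that branch, using only the JDK `Calendar` API (`parseRelative` is an illustrative name, not the extension's API):

```kotlin
import java.util.Calendar

// Mirrors the relative branch of tryAnalyzeChapterDate: "<n> <unit> ago".
val relativeRegex = """(\d+)\s+(years?|months?|weeks?|days?|hours?|mins?|minutes?|seconds?|sec)\s+ago""".toRegex(RegexOption.IGNORE_CASE)

fun parseRelative(text: String, now: Calendar): Calendar? {
    val match = relativeRegex.matchEntire(text.trim()) ?: return null
    val number = match.groupValues[1].toInt()
    val unit = match.groupValues[2].lowercase()
    val cal = now.clone() as Calendar
    when {
        unit.startsWith("year") -> cal.add(Calendar.YEAR, -number)
        unit.startsWith("month") -> cal.add(Calendar.MONTH, -number)
        unit.startsWith("week") -> cal.add(Calendar.DAY_OF_MONTH, -number * 7)
        unit.startsWith("day") -> cal.add(Calendar.DAY_OF_MONTH, -number)
        unit.startsWith("hour") -> cal.add(Calendar.HOUR, -number)
        unit.startsWith("min") -> cal.add(Calendar.MINUTE, -number)
        else -> cal.add(Calendar.SECOND, -number)
    }
    return cal
}

fun main() {
    val now = Calendar.getInstance().apply { set(2023, Calendar.OCTOBER, 15) }
    // Oct 15 minus 14 days lands on Oct 1.
    println(parseRelative("2 weeks ago", now)!!.get(Calendar.DAY_OF_MONTH)) // 1
    println(parseRelative("yesterday", now)) // null: not in "<n> <unit> ago" form
}
```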

@@ -1,54 +0,0 @@
package eu.kanade.tachiyomi.extension.all.projectsuki

import okhttp3.HttpUrl
import okhttp3.HttpUrl.Companion.toHttpUrl
import okhttp3.HttpUrl.Companion.toHttpUrlOrNull
import org.jsoup.nodes.Element

typealias NormalizedURL = HttpUrl

val NormalizedURL.rawAbsolute: String
    get() = toString()

private val psDomainURI = """https://projectsuki.com/""".toHttpUrl().toUri()

val NormalizedURL.rawRelative: String?
    get() {
        val uri = toUri()
        return psDomainURI
            .relativize(uri)
            .takeIf { it != uri }
            ?.let { """/$it""" }
    }

private val protocolMatcher = """^https?://""".toRegex()
private val domainMatcher = """^https?://(?:[a-zA-Z\d\-]+\.)+[a-zA-Z\d\-]+""".toRegex()
fun String.toNormalURL(): NormalizedURL? {
    if (contains(':') && !contains(protocolMatcher)) {
        return null
    }

    val toParse = StringBuilder()

    if (!contains(domainMatcher)) {
        toParse.append("https://projectsuki.com")
        if (!this.startsWith("/")) toParse.append('/')
    }

    toParse.append(this)

    return toParse.toString().toHttpUrlOrNull()
}

fun NormalizedURL.pathStartsWith(other: Iterable<String>): Boolean = pathSegments.zip(other).all { (l, r) -> l == r }

fun NormalizedURL.isPSUrl() = host.endsWith("${PS.identifier}.com")

fun NormalizedURL.isBookURL() = isPSUrl() && pathSegments.first() == "book"
fun NormalizedURL.isReadURL() = isPSUrl() && pathStartsWith(PS.chapterPath)
fun NormalizedURL.isImagesGalleryURL() = isPSUrl() && pathStartsWith(PS.pagePath)

fun Element.attrNormalizedUrl(attrName: String): NormalizedURL? {
    val attrValue = attr("abs:$attrName").takeIf { it.isNotBlank() } ?: return null
    return attrValue.toNormalURL()
}

@@ -1,129 +0,0 @@
@file:Suppress("MayBeConstant", "unused")

package eu.kanade.tachiyomi.extension.all.projectsuki

import org.jsoup.nodes.Element
import java.util.Calendar
import java.util.Locale
import kotlin.concurrent.getOrSet

@Suppress("MemberVisibilityCanBePrivate")
internal object PS {
    const val identifier: String = "projectsuki"
    const val identifierShort: String = "ps"

    val bookPath = listOf("book")
    val pagePath = listOf("images", "gallery")
    val chapterPath = listOf("read")

    const val SEARCH_INTENT_PREFIX: String = "$identifierShort:"

    const val PREFERENCE_WHITELIST_LANGUAGES = "$identifier-languages-whitelist"
    const val PREFERENCE_WHITELIST_LANGUAGES_TITLE = "Whitelist the following languages:"
    const val PREFERENCE_WHITELIST_LANGUAGES_SUMMARY =
        "Will keep project chapters in the following languages." +
            " Takes precedence over blacklisted languages." +
            " It will match the string present in the \"Language\" column of the chapter." +
            " Whitespaces will be trimmed." +
            " Leave empty to allow all languages." +
            " Separate each entry with a comma ','"

    const val PREFERENCE_BLACKLIST_LANGUAGES = "$identifier-languages-blacklist"
    const val PREFERENCE_BLACKLIST_LANGUAGES_TITLE = "Blacklist the following languages:"
    const val PREFERENCE_BLACKLIST_LANGUAGES_SUMMARY =
        "Will hide project chapters in the following languages." +
            " Works identically to whitelisting."
}

fun Element.containsBookLinks(): Boolean = select("a").any {
    it.attrNormalizedUrl("href")?.isBookURL() == true
}

fun Element.containsReadLinks(): Boolean = select("a").any {
    it.attrNormalizedUrl("href")?.isReadURL() == true
}

fun Element.containsImageGalleryLinks(): Boolean = select("a").any {
    it.attrNormalizedUrl("href")?.isImagesGalleryURL() == true
}

fun Element.getAllUrlElements(selector: String, attrName: String, predicate: (NormalizedURL) -> Boolean): Map<Element, NormalizedURL> {
    return select(selector)
        .mapNotNull { element -> element.attrNormalizedUrl(attrName)?.let { element to it } }
        .filter { (_, url) -> predicate(url) }
        .toMap()
}

fun Element.getAllBooks(): Map<String, PSBook> {
    val bookUrls = getAllUrlElements("a", "href") { it.isBookURL() }
    val byID: Map<String, Map<Element, NormalizedURL>> = bookUrls.groupBy { (_, url) -> url.pathSegments[1] /* /book/<bookid> */ }

    @Suppress("UNCHECKED_CAST")
    return byID.mapValues { (bookid, elements) ->
        val thumb: Element? = elements.entries.firstNotNullOfOrNull { (element, _) ->
            element.select("img").firstOrNull()
        }
        val title = elements.entries.firstOrNull { (element, _) ->
            element.select("img").isEmpty() && element.text().let {
                it.isNotBlank() && it.lowercase(Locale.US) != "show more"
            }
        }

        if (thumb != null && title != null) {
            PSBook(thumb, title.key, title.key.text(), bookid, title.value)
        } else {
            null
        }
    }.filterValues { it != null } as Map<String, PSBook>
}

inline fun <SK, K, V> Map<K, V>.groupBy(keySelector: (Map.Entry<K, V>) -> SK): Map<SK, Map<K, V>> = buildMap<_, MutableMap<K, V>> {
    this@groupBy.entries.forEach { entry ->
        getOrPut(keySelector(entry)) { HashMap() }[entry.key] = entry.value
    }
}

private val absoluteDateFormat: ThreadLocal<java.text.SimpleDateFormat> = ThreadLocal()
fun String.parseDate(ifFailed: Long = 0L): Long {
    return when {
        endsWith("ago") -> {
            // relative
            val number = takeWhile { it.isDigit() }.toInt()
            val cal = Calendar.getInstance()

            when {
                contains("day") -> cal.apply { add(Calendar.DAY_OF_MONTH, -number) }
                contains("hour") -> cal.apply { add(Calendar.HOUR, -number) }
                contains("minute") -> cal.apply { add(Calendar.MINUTE, -number) }
                contains("second") -> cal.apply { add(Calendar.SECOND, -number) }
                contains("week") -> cal.apply { add(Calendar.DAY_OF_MONTH, -number * 7) }
                contains("month") -> cal.apply { add(Calendar.MONTH, -number) }
                contains("year") -> cal.apply { add(Calendar.YEAR, -number) }
                else -> null
            }?.timeInMillis ?: ifFailed
        }

        else -> {
            // absolute?
            absoluteDateFormat.getOrSet { java.text.SimpleDateFormat("MMMM dd, yyyy", Locale.US) }.parse(this)?.time ?: ifFailed
        }
    }
}

private val imageExtensions = setOf(".jpg", ".png", ".jpeg", ".webp", ".gif", ".avif", ".tiff")
private val simpleSrcVariants = listOf("src", "data-src", "data-lazy-src")
fun Element.imgNormalizedURL(): NormalizedURL? {
    simpleSrcVariants.forEach { variant ->
        if (hasAttr(variant)) {
            return attrNormalizedUrl(variant)
        }
    }

    if (hasAttr("srcset")) {
        return attr("abs:srcset").substringBefore(" ").toNormalURL()
    }

    return attributes().firstOrNull {
        it.key.contains("src") && imageExtensions.any { ext -> it.value.contains(ext) }
    }?.value?.substringBefore(" ")?.toNormalURL()
}

@@ -1,11 +0,0 @@
package eu.kanade.tachiyomi.extension.all.projectsuki

import org.jsoup.nodes.Element

data class PSBook(
    val imgElement: Element,
    val titleElement: Element,
    val title: String,
    val mangaID: String,
    val url: NormalizedURL,
)

@@ -1,90 +0,0 @@
@file:Suppress("CanSealedSubClassBeObject")

package eu.kanade.tachiyomi.extension.all.projectsuki

import eu.kanade.tachiyomi.source.model.Filter
import okhttp3.HttpUrl

@Suppress("NOTHING_TO_INLINE")
object PSFilters {
    internal sealed interface AutoFilter {
        fun applyTo(builder: HttpUrl.Builder)
    }

    private inline fun HttpUrl.Builder.setAdv() = setQueryParameter("adv", "1")

    class Author : Filter.Text("Author"), AutoFilter {

        override fun applyTo(builder: HttpUrl.Builder) {
            when {
                state.isNotBlank() -> builder.setAdv().addQueryParameter("author", state)
            }
        }

        companion object {
            val ownHeader by lazy { Header("Cannot search by multiple authors") }
        }
    }

    class Artist : Filter.Text("Artist"), AutoFilter {

        override fun applyTo(builder: HttpUrl.Builder) {
            when {
                state.isNotBlank() -> builder.setAdv().addQueryParameter("artist", state)
            }
        }

        companion object {
            val ownHeader by lazy { Header("Cannot search by multiple artists") }
        }
    }

    class Status : Filter.Select<Status.Value>("Status", Value.values()), AutoFilter {
        enum class Value(val display: String, val query: String) {
            ANY("Any", ""),
            ONGOING("Ongoing", "ongoing"),
            COMPLETED("Completed", "completed"),
            HIATUS("Hiatus", "hiatus"),
            CANCELLED("Cancelled", "cancelled"),
            ;

            override fun toString(): String = display

            companion object {
                private val values: Array<Value> = values()
                operator fun get(ordinal: Int) = values[ordinal]
            }
        }

        override fun applyTo(builder: HttpUrl.Builder) {
            when (val state = Value[state]) {
                Value.ANY -> {} // default, do nothing
                else -> builder.setAdv().addQueryParameter("status", state.query)
            }
        }
    }

    class Origin : Filter.Select<Origin.Value>("Origin", Value.values()), AutoFilter {
        enum class Value(val display: String, val query: String?) {
            ANY("Any", null),
            KOREA("Korea", "kr"),
            CHINA("China", "cn"),
            JAPAN("Japan", "jp"),
            ;

            override fun toString(): String = display

            companion object {
                private val values: Array<Value> = Value.values()
                operator fun get(ordinal: Int) = values[ordinal]
            }
        }

        override fun applyTo(builder: HttpUrl.Builder) {
            when (val state = Value[state]) {
                Value.ANY -> {} // default, do nothing
                else -> builder.setAdv().addQueryParameter("origin", state.query)
            }
        }
    }
}

@@ -0,0 +1,84 @@
package eu.kanade.tachiyomi.extension.all.projectsuki

import okhttp3.HttpUrl

/**
 * @see EXTENSION_INFO Found in ProjectSuki.kt
 */
@Suppress("unused")
private inline val INFO: Nothing get() = error("INFO")

/**
 * Utility class made to help identify different urls.
 *
 * A null regex means wildcard: it matches anything.
 *
 * Meant to be used with [matchAgainst]; will match against [HttpUrl.pathSegments].
 *
 * @author Federico d'Alonzo <me@npgx.dev>
 */
data class PathPattern(val paths: List<Regex?>) {
    constructor(vararg paths: Regex?) : this(paths.asList())

    init {
        if (paths.isEmpty()) {
            reportErrorToUser {
                "Invalid PathPattern, cannot be empty!"
            }
        }
    }
}

/**
 * Utility class to represent the [MatchResult]s obtained when matching a [PathPattern]
 * against an [HttpUrl].
 *
 * When [matchResults] is null, it means the [HttpUrl]:
 * - when `allowSubPaths` in [matchAgainst] is `false`: has [HttpUrl.pathSegments]`.size` != [PathPattern.paths]`.size`
 * - when `allowSubPaths` in [matchAgainst] is `true`: has [HttpUrl.pathSegments]`.size` < [PathPattern.paths]`.size`
 *
 * @see matchAgainst
 *
 * @author Federico d'Alonzo <me@npgx.dev>
 */
data class PathMatchResult(val doesMatch: Boolean, val matchResults: List<MatchResult?>?) {
    operator fun get(name: String): MatchGroup? = matchResults?.firstNotNullOfOrNull {
        it?.groups
            // this throws if the group named "name" isn't found AND can return null too
            ?.runCatching { get(name) }
            ?.getOrNull()
    }

    init {
        if (matchResults?.isEmpty() == true) {
            reportErrorToUser {
                "Invalid PathMatchResult, matchResults must either be null or not empty!"
            }
        }
    }
}

/**
 * @see PathPattern
 * @see PathMatchResult
 */
fun HttpUrl.matchAgainst(pattern: PathPattern, allowSubPaths: Boolean = false, ignoreEmptySegments: Boolean = true): PathMatchResult {
    val actualSegments: List<String> = if (ignoreEmptySegments) pathSegments.filter { it.isNotBlank() } else pathSegments
    val sizeReq = when (allowSubPaths) {
        false -> actualSegments.size == pattern.paths.size
        true -> actualSegments.size >= pattern.paths.size
    }

    if (!sizeReq) return PathMatchResult(false, null)

    val matchResults: MutableList<MatchResult?> = ArrayList()
    var matches = true

    actualSegments.zip(pattern.paths) { segment, regex ->
        val match: MatchResult? = regex?.matchEntire(segment)
        matchResults.add(match)
        matches = matches && (regex == null || match != null)
    }

    return PathMatchResult(matches, matchResults)
}
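The `PathPattern`/`matchAgainst` pair above boils down to: compare a URL's path segments pairwise against a list of regexes, where a null entry acts as a wildcard and non-null entries must `matchEntire` their segment. A self-contained sketch of that matching rule using plain lists instead of OkHttp types (`matches` and `bookPattern` are illustrative names, not the extension's API):

```kotlin
// Illustrative stand-in for PathPattern: pattern entries are regexes, null would be a wildcard.
val bookPattern: List<Regex?> = listOf("book".toRegex(RegexOption.IGNORE_CASE), "(?<bookid>.+)".toRegex())

fun matches(pathSegments: List<String>, pattern: List<Regex?>, allowSubPaths: Boolean = false): Boolean {
    val segments = pathSegments.filter { it.isNotBlank() } // mirrors ignoreEmptySegments = true
    val sizeOk = if (allowSubPaths) segments.size >= pattern.size else segments.size == pattern.size
    if (!sizeOk) return false
    // zip truncates to the shorter list, so extra sub-path segments are ignored when allowed.
    return segments.zip(pattern).all { (segment, regex) -> regex == null || regex.matchEntire(segment) != null }
}

fun main() {
    println(matches(listOf("book", "12345"), bookPattern)) // true
    println(matches(listOf("read", "12345"), bookPattern)) // false: first segment fails
    println(matches(listOf("book", "12345", "extra"), bookPattern)) // false: size mismatch
    println(matches(listOf("book", "12345", "extra"), bookPattern, allowSubPaths = true)) // true
}
```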
|
|
@ -1,15 +1,10 @@
|
|||
package eu.kanade.tachiyomi.extension.all.projectsuki
|
||||
|
||||
import android.app.Application
|
||||
import android.content.SharedPreferences
|
||||
import androidx.preference.EditTextPreference
|
||||
import androidx.preference.PreferenceScreen
|
||||
import eu.kanade.tachiyomi.lib.randomua.addRandomUAPreferenceToScreen
|
||||
import eu.kanade.tachiyomi.lib.randomua.getPrefCustomUA
|
||||
import eu.kanade.tachiyomi.lib.randomua.getPrefUAType
|
||||
import eu.kanade.tachiyomi.lib.randomua.setRandomUserAgent
|
||||
import eu.kanade.tachiyomi.network.GET
|
||||
import eu.kanade.tachiyomi.network.POST
|
||||
import eu.kanade.tachiyomi.network.asObservableSuccess
|
||||
import eu.kanade.tachiyomi.network.interceptor.rateLimit
|
||||
import eu.kanade.tachiyomi.source.ConfigurableSource
|
||||
|
@ -22,241 +17,441 @@ import eu.kanade.tachiyomi.source.model.SManga
|
|||
import eu.kanade.tachiyomi.source.model.UpdateStrategy
|
||||
import eu.kanade.tachiyomi.source.online.HttpSource
|
||||
import eu.kanade.tachiyomi.util.asJsoup
|
||||
import kotlinx.serialization.encodeToString
|
||||
import kotlinx.serialization.json.Json
|
||||
import kotlinx.serialization.json.jsonObject
|
||||
import kotlinx.serialization.json.jsonPrimitive
|
||||
import okhttp3.HttpUrl
|
||||
import okhttp3.HttpUrl.Companion.toHttpUrl
|
||||
import okhttp3.MediaType.Companion.toMediaType
|
||||
import okhttp3.HttpUrl.Companion.toHttpUrlOrNull
|
||||
import okhttp3.OkHttpClient
|
||||
import okhttp3.Request
|
||||
import okhttp3.RequestBody.Companion.toRequestBody
|
||||
import okhttp3.Response
|
||||
import org.jsoup.Jsoup
|
||||
import org.jsoup.nodes.Element
|
||||
import org.jsoup.nodes.Document
|
||||
import rx.Observable
|
||||
import uy.kohesive.injekt.Injekt
|
||||
import uy.kohesive.injekt.api.get
|
||||
import java.net.URI
|
||||
import java.util.Locale
|
||||
import java.util.concurrent.TimeUnit
|
||||
import kotlin.math.floor
|
||||
import kotlin.math.log10
|
||||
import kotlin.math.pow
|
||||
|
||||
/**
|
||||
* [Project Suki](https://projectsuki.com)
|
||||
* [Tachiyomi](https://github.com/tachiyomiorg/tachiyomi)
|
||||
* [extension](https://github.com/tachiyomiorg/tachiyomi-extensions)
|
||||
*
|
||||
* Most of the code should be documented, `@author` KDoc tags are mostly to know
|
||||
* who to bother *when necessary*.
|
||||
* If you contributed to this extension, be sure to add yourself in an `@author` tag!
|
||||
*
|
||||
* If you want to understand how this extension works,
|
||||
* I recommend first looking at [ProjectSuki], then [DataExtractor],
|
||||
* then the rest of the project.
|
||||
*/
|
||||
internal inline val EXTENSION_INFO: Nothing get() = error("EXTENSION_INFO")
|
||||
|
||||
internal const val SHORT_FORM_ID: String = """ps"""
|
||||
|
||||
internal val homepageUrl: HttpUrl = "https://projectsuki.com".toHttpUrl()
|
||||
internal val homepageUri: URI = homepageUrl.toUri()
|
||||
|
||||
/** PATTERN: `https://projectsuki.com/book/<bookid>` */
|
||||
internal val bookUrlPattern = PathPattern(
|
||||
"""book""".toRegex(RegexOption.IGNORE_CASE),
|
||||
"""(?<bookid>.+)""".toRegex(RegexOption.IGNORE_CASE),
|
||||
)
|
||||
|
||||
/** PATTERN: `https://projectsuki.com/browse/<pagenum>` */
|
||||
@Suppress("unused")
|
||||
class ProjectSuki : HttpSource(), ConfigurableSource {
|
||||
override val name: String = "Project Suki"
|
||||
override val baseUrl: String = "https://projectsuki.com"
|
||||
override val lang: String = "en"
|
||||
|
||||
private val preferences: SharedPreferences by lazy {
|
||||
Injekt.get<Application>().getSharedPreferences("source_$id", 0x0000)
|
||||
}
|
||||
|
||||
private fun String.processLangPref(): List<String> = split(",").map { it.trim().lowercase(Locale.US) }
|
||||
|
||||
private val SharedPreferences.whitelistedLanguages: List<String>
|
||||
get() = getString(PS.PREFERENCE_WHITELIST_LANGUAGES, "")!!
|
||||
.processLangPref()
|
||||
|
||||
private val SharedPreferences.blacklistedLanguages: List<String>
|
||||
get() = getString(PS.PREFERENCE_BLACKLIST_LANGUAGES, "")!!
|
||||
.processLangPref()
|
||||
|
||||
override fun setupPreferenceScreen(screen: PreferenceScreen) {
|
||||
addRandomUAPreferenceToScreen(screen)
|
||||
|
||||
screen.addPreference(
|
||||
EditTextPreference(screen.context).apply {
|
||||
key = PS.PREFERENCE_WHITELIST_LANGUAGES
|
||||
title = PS.PREFERENCE_WHITELIST_LANGUAGES_TITLE
|
||||
summary = PS.PREFERENCE_WHITELIST_LANGUAGES_SUMMARY
|
||||
},
|
||||
internal val browsePattern = PathPattern(
|
||||
"""browse""".toRegex(RegexOption.IGNORE_CASE),
|
||||
"""(?<pagenum>\d+)""".toRegex(RegexOption.IGNORE_CASE),
|
||||
)

screen.addPreference(
EditTextPreference(screen.context).apply {
key = PS.PREFERENCE_BLACKLIST_LANGUAGES
title = PS.PREFERENCE_BLACKLIST_LANGUAGES_TITLE
summary = PS.PREFERENCE_BLACKLIST_LANGUAGES_SUMMARY
},
/**
 * PATTERN: `https://projectsuki.com/read/<bookid>/<chapterid>/<startpage>`
 *
 * `<startpage>` is actually a filter of sorts that will remove pages < `<startpage>`'s value.
 */
internal val chapterUrlPattern = PathPattern(
"""read""".toRegex(RegexOption.IGNORE_CASE),
"""(?<bookid>.+)""".toRegex(RegexOption.IGNORE_CASE),
"""(?<chapterid>.+)""".toRegex(RegexOption.IGNORE_CASE),
"""(?<startpage>.+)""".toRegex(RegexOption.IGNORE_CASE),
)
}

override val client: OkHttpClient = network.cloudflareClient.newBuilder()
.setRandomUserAgent(
userAgentType = preferences.getPrefUAType(),
customUA = preferences.getPrefCustomUA(),
filterInclude = listOf("chrome"),
/**
 * PATTERNS:
 * - `https://projectsuki.com/images/gallery/<bookid>/thumb`
 * - `https://projectsuki.com/images/gallery/<bookid>/thumb.<thumbextension>`
 * - `https://projectsuki.com/images/gallery/<bookid>/<thumbwidth>-thumb`
 * - `https://projectsuki.com/images/gallery/<bookid>/<thumbwidth>-thumb.<thumbextension>`
 */
internal val thumbnailUrlPattern = PathPattern(
"""images""".toRegex(RegexOption.IGNORE_CASE),
"""gallery""".toRegex(RegexOption.IGNORE_CASE),
"""(?<bookid>.+)""".toRegex(RegexOption.IGNORE_CASE),
"""(?<thumbwidth>\d+-)?thumb(?:\.(?<thumbextension>.+))?""".toRegex(RegexOption.IGNORE_CASE),
)
.rateLimit(4)

/** PATTERN: `https://projectsuki.com/images/gallery/<bookid>/<uuid>/<pagenum>` */
internal val pageUrlPattern = PathPattern(
"""images""".toRegex(RegexOption.IGNORE_CASE),
"""gallery""".toRegex(RegexOption.IGNORE_CASE),
"""(?<bookid>.+)""".toRegex(RegexOption.IGNORE_CASE),
"""(?<uuid>.+)""".toRegex(RegexOption.IGNORE_CASE),
"""(?<pagenum>.+)""".toRegex(RegexOption.IGNORE_CASE),
)

/** PATTERN: `https://projectsuki.com/genre/<genre>` */
internal val genreSearchUrlPattern = PathPattern(
"""genre""".toRegex(RegexOption.IGNORE_CASE),
"""(?<genre>.+)""".toRegex(RegexOption.IGNORE_CASE),
)

/** PATTERN: `https://projectsuki.com/group/<groupid>` */
@Suppress("unused")
internal val groupUrlPattern = PathPattern(
"""group""".toRegex(RegexOption.IGNORE_CASE),
"""(?<groupid>.+)""".toRegex(RegexOption.IGNORE_CASE),
)

/**
 * Used on the website when there's an image loading error; could be used in the extension.
 */
@Suppress("unused")
internal val emptyImageUrl: HttpUrl = homepageUrl.newBuilder()
.addPathSegment("images")
.addPathSegment("gallery")
.addPathSegment("empty.jpg")
.build()

override fun popularMangaRequest(page: Int) = GET(baseUrl, headers)
/**
 * Removes the [URL's](https://en.wikipedia.org/wiki/URL) host and scheme/protocol,
 * leaving only the path, query and fragment, *without leading `/`*
 *
 * @see URI.relativize
 */
internal val HttpUrl.rawRelative: String?
get() {
val uri = toUri()
val relative = homepageUri.relativize(uri)
return when {
uri === relative -> null
else -> relative.toASCIIString()
}
}
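`rawRelative` leans on `java.net.URI.relativize`, which returns its argument unchanged (the same instance) when relativization is impossible; that is what the `uri === relative` identity check detects. A small sketch of this behavior (the projectsuki host is real; `example.com` is just an illustrative foreign URL):

```kotlin
import java.net.URI

fun main() {
    val home = URI("https://projectsuki.com/")

    // host and scheme are stripped, leaving path + query with no leading '/'
    val book = URI("https://projectsuki.com/book/12345?page=2")
    println(home.relativize(book)) // book/12345?page=2

    // a URL on another host cannot be relativized: the same instance comes back,
    // which is exactly the failure case rawRelative maps to null
    val foreign = URI("https://example.com/somewhere")
    println(home.relativize(foreign) === foreign) // true
}
```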

// differentiating between popular and latest manga in the main page is
// *theoretically possible* but a pain, as such, this is fine "for now"
internal val reportPrefix: String
get() = """Error! Report on GitHub (tachiyomiorg/tachiyomi-extensions)"""

/** Just throw an [error], which will get caught by Tachiyomi: the message will be exposed as a [toast][android.widget.Toast]. */
internal inline fun reportErrorToUser(message: () -> String): Nothing {
error("""$reportPrefix: ${message()}""")
}

/** Used when chapters don't have a [Language][DataExtractor.ChaptersTableColumnDataType.Language] column (if that ever happens). */
internal const val UNKNOWN_LANGUAGE: String = "unknown"

/**
 * Actual Tachiyomi extension, ties everything together.
 *
 * Most of the work happens in [DataExtractor], [ProjectSukiAPI], [ProjectSukiFilters] and [ProjectSukiPreferences].
 *
 * @author Federico d'Alonzo <me@npgx.dev>
 */
@Suppress("unused")
class ProjectSuki : HttpSource(), ConfigurableSource {

override val name: String = "Project Suki"
override val baseUrl: String = homepageUri.toASCIIString()
override val lang: String = "all"
override val id: Long = 8965918600406781666L

/** Handles extension preferences found in Extensions > Project Suki > Gear icon */
private val preferences = ProjectSukiPreferences(id)

/** See [Kotlinx-Serialization](https://github.com/Kotlin/kotlinx.serialization). */
private val json: Json = Json {
ignoreUnknownKeys = true
explicitNulls = true
encodeDefaults = true
}

override fun setupPreferenceScreen(screen: PreferenceScreen) {
with(preferences) { screen.configure() }
}

/**
 * [OkHttp's](https://square.github.io/okhttp/) [OkHttpClient] that handles network requests and responses.
 *
 * Thanks to Tachiyomi's [NetworkHelper](https://github.com/tachiyomiorg/tachiyomi/blob/58daedc89ee18d04e7af5bab12629680dba4096c/core/src/main/java/eu/kanade/tachiyomi/network/NetworkHelper.kt#L21C12-L21C12)
 * (this is a permalink, check for updated version),
 * most client options are already set as they should be, including the [Cache][okhttp3.Cache].
 */
override val client: OkHttpClient = network.client.newBuilder()
.setRandomUserAgent(
userAgentType = preferences.shared.getPrefUAType(),
customUA = preferences.shared.getPrefCustomUA(),
)
.rateLimit(2, 1, TimeUnit.SECONDS)
.build()

/**
 * Specify what request will be sent to the server.
 *
 * This specific method returns a [GET](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods)
 * request to be sent to [https://projectsuki.com/browse](https://projectsuki.com/browse).
 *
 * Using the default [HttpSource]'s [Headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers).
 */
override fun popularMangaRequest(page: Int) = GET(
homepageUrl.newBuilder()
.addPathSegment("browse")
.addPathSegment((page - 1).toString()) // starts at 0
.build(),
headers,
)

/** Whether or not this extension supports the "Latest" tab. */
override val supportsLatest: Boolean get() = true

/** Same concept as [popularMangaRequest], but is sent to [https://projectsuki.com/](https://projectsuki.com/). */
override fun latestUpdatesRequest(page: Int) = GET(homepageUrl, headers)

/**
 * Utility to find and apply a filter specified by [T],
 * see [reified](https://kotlinlang.org/docs/inline-functions.html#reified-type-parameters)
 * if you're not familiar with the concept.
 */
private inline fun <reified T> HttpUrl.Builder.applyPSFilter(
from: FilterList,
): HttpUrl.Builder where T : Filter<*>, T : ProjectSukiFilters.ProjectSukiFilter = apply {
from.firstNotNullOfOrNull { it as? T }?.run { applyFilter() }
}

/**
 * Same concept as [popularMangaRequest], but is sent to [https://projectsuki.com/search](https://projectsuki.com/search).
 * This is the [Full-Site][ProjectSukiFilters.SearchMode.FULL_SITE] variant of search, it *will* return results that have no chapters.
 */
override fun searchMangaRequest(page: Int, query: String, filters: FilterList): Request {
return GET(
homepageUrl.newBuilder()
.addPathSegment("search")
.addQueryParameter("page", (page - 1).toString())
.addQueryParameter("q", query)
.applyPSFilter<ProjectSukiFilters.Origin>(from = filters)
.applyPSFilter<ProjectSukiFilters.Status>(from = filters)
.applyPSFilter<ProjectSukiFilters.Author>(from = filters)
.applyPSFilter<ProjectSukiFilters.Artist>(from = filters)
.build(),
headers,
)
}

/**
 * Handles the server's [Response] that was returned from [popularMangaRequest]'s [Request].
 *
 * Because we asked the server for a webpage, it will return, in the [Request's body][okhttp3.RequestBody],
 * the [html](https://developer.mozilla.org/en-US/docs/Web/HTML) that makes up that page,
 * including any [css](https://developer.mozilla.org/en-US/docs/Web/CSS) and
 * [JavaScript](https://developer.mozilla.org/en-US/docs/Web/JavaScript) in `<script>` tags.
 *
 * NOTE: [Jsoup](https://jsoup.org/) is not a browser, but an HTML parser and manipulator,
 * as such no JavaScript will actually run.
 * The html that you can see from a browser's [dev-tools](https://github.com/firefox-devtools)
 * could be very different from the initial state of the web-page's HTML,
 * especially for pages that use [hydration-heavy](https://en.wikipedia.org/wiki/Hydration_(web_development))
 * [JavaScript frameworks](https://developer.mozilla.org/en-US/docs/Learn/Tools_and_testing/Client-side_JavaScript_frameworks).
 *
 * To see the initial contents of a response, you can use an API tool like [REQBIN](https://reqbin.com/).
 *
 * [SManga]'s url should be in relative form, see [this SO answer](https://stackoverflow.com/a/21828923)
 * for a comprehensive difference between relative and absolute URLs.
 *
 * [SManga]'s thumbnail_url should instead be in absolute form. If possible [it should be set](https://github.com/tachiyomiorg/tachiyomi-extensions/blob/master/CONTRIBUTING.md#popular-manga)
 * at this point to avoid additional server requests. But if that is not possible, [fetchMangaDetails] will be called to fill in the details.
 */
override fun popularMangaParse(response: Response): MangasPage {
val document = response.asJsoup()
val allBooks = document.getAllBooks()
return MangasPage(
mangas = allBooks.mapNotNull mangas@{ (_, psbook) ->
val (img, _, titleText, _, url) = psbook
val document: Document = response.asJsoup()

val relativeUrl = url.rawRelative ?: return@mangas null
val extractor = DataExtractor(document)
val books: Set<DataExtractor.PSBook> = extractor.books

val mangas: List<SManga> = books.map { book ->
SManga.create().apply {
this.url = relativeUrl
this.title = titleText
this.thumbnail_url = img.imgNormalizedURL()?.rawAbsolute
this.url = book.bookUrl.rawRelative ?: reportErrorToUser { "Could not relativize ${book.bookUrl}" }
this.title = book.rawTitle
this.thumbnail_url = book.thumbnail.toUri().toASCIIString()
}
}

return MangasPage(
mangas = mangas,
hasNextPage = mangas.size >= 30, // observed max number of results in search
)
}

/**
 * Very similar to [popularMangaParse].
 *
 * Due to Project Suki's [home page](https://projectsuki.com) design,
 * differentiating between actually-latest chapters, "Trending" and "New additions",
 * is theoretically possible, but would be fragile and possibly error-prone.
 *
 * So we just grab everything in the homepage.
 * [DataExtractor.books] automatically de-duplicates based on [BookID].
 */
override fun latestUpdatesParse(response: Response): MangasPage {
val document: Document = response.asJsoup()

val extractor = DataExtractor(document)
val books: Set<DataExtractor.PSBook> = extractor.books

return MangasPage(
mangas = books.map { book ->
SManga.create().apply {
this.url = book.bookUrl.rawRelative ?: reportErrorToUser { "Could not relativize ${book.bookUrl}" }
this.title = book.rawTitle
this.thumbnail_url = book.thumbnail.toUri().toASCIIString()
}
},
hasNextPage = false,
)
}

override val supportsLatest: Boolean = false
override fun latestUpdatesRequest(page: Int): Request = throw UnsupportedOperationException()
override fun latestUpdatesParse(response: Response): MangasPage = throw UnsupportedOperationException()

/**
 * Function that is responsible for providing Tachiyomi with an [Observable](https://reactivex.io/documentation/observable.html)
 * that will return a single [MangasPage] (Tachiyomi uses [Observable.single] behind the scenes).
 *
 * Note that you shouldn't use [Observable.never] to convey "there are no mangas",
 * but [Observable.just(MangasPage(emptyList(), false))][Observable.just].
 * Otherwise Tachiyomi will just wait for the Observable forever (or until a timeout).
 *
 * Most of the time you won't need to override this function: [searchMangaRequest] and [searchMangaParse] will suffice.
 * But if you need to replace the default search behaviour (e.g. because of an [Url Activity][ProjectSukiSearchUrlActivity]),
 * you might need to override this function.
 */
override fun fetchSearchManga(page: Int, query: String, filters: FilterList): Observable<MangasPage> {
val searchMode: ProjectSukiFilters.SearchMode = filters.filterIsInstance<ProjectSukiFilters.SearchModeFilter>()
.singleOrNull()
?.state
?.let { ProjectSukiFilters.SearchMode[it] } ?: ProjectSukiFilters.SearchMode.SMART

return when {
/*query.startsWith(PS.SEARCH_INTENT_PREFIX) -> {
val id = query.substringAfter(PS.SEARCH_INTENT_PREFIX)
client.newCall(getMangaByIdAsSearchResult(id))
// sent by the url activity, might also be because the user entered a query via $ps:
// but that won't really happen unless the user wants to do that
query.startsWith(INTENT_QUERY_PREFIX) -> {
val urlQuery = query.removePrefix(INTENT_QUERY_PREFIX)
if (urlQuery.isBlank()) error("Empty search query!")

val rawUrl = """${homepageUri.toASCIIString()}/search?$urlQuery"""
val url = rawUrl.toHttpUrlOrNull() ?: reportErrorToUser {
"Invalid search url: $rawUrl"
}

client.newCall(GET(url, headers))
.asObservableSuccess()
.map { response -> searchMangaParse(response, false) }
}

// use result from https://projectsuki.com/api/book/search
searchMode == ProjectSukiFilters.SearchMode.SMART || searchMode == ProjectSukiFilters.SearchMode.SIMPLE -> {
val simpleMode = searchMode == ProjectSukiFilters.SearchMode.SIMPLE

client.newCall(ProjectSukiAPI.bookSearchRequest(json, headers))
.asObservableSuccess()
.map { response -> ProjectSukiAPI.parseBookSearchResponse(json, response) }
.map { data -> data.toMangasPage(query, simpleMode) }
}

// use https://projectsuki.com/search
else -> client.newCall(searchMangaRequest(page, query, filters))
.asObservableSuccess()
.map { response -> searchMangaParse(response) }
}*/

else -> Observable.defer {
try {
client.newCall(searchMangaRequest(page, query, filters))
.asObservableSuccess()
} catch (e: NoClassDefFoundError) {
throw RuntimeException(e)
}
}.map { response -> searchMangaParse(response) }
}
}

override fun searchMangaRequest(page: Int, query: String, filters: FilterList): Request {
return GET(
baseUrl.toHttpUrl().newBuilder().apply {
addPathSegment("search")
addQueryParameter("page", (page - 1).toString())
addQueryParameter("q", query)

filters.applyFilter<PSFilters.Origin>(this)
filters.applyFilter<PSFilters.Status>(this)
filters.applyFilter<PSFilters.Author>(this)
filters.applyFilter<PSFilters.Artist>(this)
}.build(),
headers,
)
private fun filterList(vararg sequences: Sequence<Filter<*>>): FilterList {
return FilterList(sequences.asSequence().flatten().toList())
}

private inline fun <reified T> FilterList.applyFilter(to: HttpUrl.Builder) where T : Filter<*>, T : PSFilters.AutoFilter {
firstNotNullOfOrNull { it as? T }?.applyTo(to)
}

override fun getFilterList() = FilterList(
Filter.Header("Filters only take effect when searching for something!"),
PSFilters.Origin(),
PSFilters.Status(),
PSFilters.Author.ownHeader,
PSFilters.Author(),
PSFilters.Artist.ownHeader,
PSFilters.Artist(),
/**
 * Should return a fresh [FilterList] containing fresh (new) instances
 * of all the filters you want to be available.
 * Otherwise things like the reset button won't work.
 */
override fun getFilterList(): FilterList = filterList(
ProjectSukiFilters.headersSequence(preferences),
ProjectSukiFilters.filtersSequence(preferences),
ProjectSukiFilters.footersSequence(preferences),
)

override fun searchMangaParse(response: Response): MangasPage {
/**
 * Very similar to [popularMangaParse].
 *
 * Unfortunately, because Project Suki has a "Next" button even when the next page is empty,
 * it's useless to depend on that, so we just use a simple heuristic to determine
 * if the page contained in the [Response] is the last one.
 *
 * The heuristic *will* fail if the last page has 30 or more entries.
 */
override fun searchMangaParse(response: Response): MangasPage = searchMangaParse(response, null)

/** [searchMangaParse] extended with [overrideHasNextPage]. */
private fun searchMangaParse(response: Response, overrideHasNextPage: Boolean? = null): MangasPage {
val document = response.asJsoup()
val allBooks = document.getAllBooks()

val mangas = allBooks.mapNotNull mangas@{ (_, psbook) ->
val (img, _, titleText, _, url) = psbook

val relativeUrl = url.rawRelative ?: return@mangas null
val extractor = DataExtractor(document)
val books: Set<DataExtractor.PSBook> = extractor.books

val mangas = books.map { book ->
SManga.create().apply {
this.url = relativeUrl
this.title = titleText
this.thumbnail_url = img.imgNormalizedURL()?.rawAbsolute
this.url = book.bookUrl.rawRelative ?: reportErrorToUser { "Could not relativize ${book.bookUrl}" }
this.title = book.rawTitle
this.thumbnail_url = book.thumbnail.toUri().toASCIIString()
}
}

return MangasPage(
mangas = mangas,
hasNextPage = mangas.size >= 30, // observed max number of results in search
hasNextPage = overrideHasNextPage ?: (mangas.size >= 30),
)
}

override fun fetchMangaDetails(manga: SManga): Observable<SManga> {
return client.newCall(mangaDetailsRequest(manga))
.asObservableSuccess()
.map { response ->
mangaDetailsParse(response, incomplete = manga).apply { initialized = true }
}
}
/**
 * Handles the [Response] given by [mangaDetailsRequest]'s [Request].
 * [HttpSource]'s inheritors will have a default [mangaDetailsRequest] that asks for:
 * ```
 * GET(baseUrl + manga.url, headers)
 * ```
 * You can override [mangaDetailsRequest] if this is not the case for you.
 *
 * Fills out all [SManga]'s fields:
 * - url (relative)
 * - title
 * - artist
 * - author
 * - description
 * - genre (comma-separated list of genres)
 * - status (one of the constants in [SManga.Companion])
 * - thumbnail_url (absolute)
 * - update_strategy (enum [UpdateStrategy])
 *
 * Note that you should use [SManga.create] instead of implementing your own version of [SManga].
 */
override fun mangaDetailsParse(response: Response): SManga {
val document: Document = response.asJsoup()
val extractor = DataExtractor(document)

private val displayNoneMatcher = """display: ?none;""".toRegex()
private val emptyImageURLAbsolute = """https://projectsuki.com/images/gallery/empty.jpg""".toNormalURL()!!.rawAbsolute
private val emptyImageURLRelative = """https://projectsuki.com/images/gallery/empty.jpg""".toNormalURL()!!.rawRelative!!
override fun mangaDetailsParse(response: Response): SManga = throw UnsupportedOperationException("not used")
private fun mangaDetailsParse(response: Response, incomplete: SManga): SManga {
val document = response.asJsoup()
val allLinks = document.getAllUrlElements("a", "href") { it.isPSUrl() }

val thumb: Element? = document.select("img").firstOrNull { img ->
img.attr("onerror").let {
it.contains(emptyImageURLAbsolute) ||
it.contains(emptyImageURLRelative)
}
}

val authors: Map<Element, NormalizedURL> = allLinks.filter { (_, url) ->
url.queryParameterNames.contains("author")
}

val artists: Map<Element, NormalizedURL> = allLinks.filter { (_, url) ->
url.queryParameterNames.contains("artist")
}

val statuses: Map<Element, NormalizedURL> = allLinks.filter { (_, url) ->
url.queryParameterNames.contains("status")
}

val origins: Map<Element, NormalizedURL> = allLinks.filter { (_, url) ->
url.queryParameterNames.contains("origin")
}

val genres: Map<Element, NormalizedURL> = allLinks.filter { (_, url) ->
url.pathStartsWith(listOf("genre"))
}

val description = document.select("#descriptionCollapse").joinToString("\n-----\n", postfix = "\n") { it.wholeText() }

val alerts = document.select(".alert, .alert-info")
.filter(
predicate = {
it.parents().none { parent ->
parent.attr("style")
.contains(displayNoneMatcher)
}
},
)

val userRating = document.select("#ratings")
.firstOrNull()
?.children()
?.count { it.hasClass("text-warning") }
?.takeIf { it > 0 }
val data: DataExtractor.PSBookDetails = extractor.bookDetails

return SManga.create().apply {
url = incomplete.url
title = incomplete.title
thumbnail_url = thumb?.imgNormalizedURL()?.rawAbsolute ?: incomplete.thumbnail_url
url = data.book.bookUrl.rawRelative ?: reportErrorToUser { "Could not relativize ${data.book.bookUrl}" }
title = data.book.rawTitle
thumbnail_url = data.book.thumbnail.toUri().toASCIIString()

author = authors.keys.joinToString(", ") { it.text() }
artist = artists.keys.joinToString(", ") { it.text() }
status = when (statuses.keys.joinToString("") { it.text().trim() }.lowercase(Locale.US)) {
author = data.detailsTable[DataExtractor.BookDetail.AUTHOR]
artist = data.detailsTable[DataExtractor.BookDetail.ARTIST]
status = when (data.detailsTable[DataExtractor.BookDetail.STATUS]?.trim()?.lowercase(Locale.US)) {
"ongoing" -> SManga.ONGOING
"completed" -> SManga.PUBLISHING_FINISHED
"hiatus" -> SManga.ON_HIATUS
@@ -264,180 +459,133 @@ class ProjectSuki : HttpSource(), ConfigurableSource {
else -> SManga.UNKNOWN
}

this.description = buildString {
if (alerts.isNotEmpty()) {
description = buildString {
if (data.alertData.isNotEmpty()) {
appendLine("Alerts have been found, refreshing the manga later might help in removing them.")
appendLine()

alerts.forEach { alert ->
var appendedSomething = false
alert.select("h4").singleOrNull()?.let {
appendLine(it.text())
appendedSomething = true
}
alert.select("p").singleOrNull()?.let {
appendLine(it.text())
appendedSomething = true
}
if (!appendedSomething) {
appendLine(alert.text())
}
}

appendLine()
data.alertData.forEach {
appendLine(it)
appendLine()
}

appendLine(description)

fun appendToDescription(by: String, data: String?) {
if (data != null) append(by).appendLine(data)
appendLine()
}

appendToDescription("User Rating: ", """${userRating ?: "?"}/5""")
appendToDescription("Authors: ", author)
appendToDescription("Artists: ", artist)
appendToDescription("Status: ", statuses.keys.joinToString(", ") { it.text() })
appendToDescription("Origin: ", origins.keys.joinToString(", ") { it.text() })
appendToDescription("Genres: ", genres.keys.joinToString(", ") { it.text() })
}
appendLine(data.description)
appendLine()

this.update_strategy = if (status != SManga.CANCELLED) UpdateStrategy.ALWAYS_UPDATE else UpdateStrategy.ONLY_FETCH_ONCE
this.genre = buildList {
addAll(genres.keys.map { it.text() })
origins.values.forEach { url ->
when (url.queryParameter("origin")) {
"kr" -> add("Manhwa")
"cn" -> add("Manhua")
"jp" -> add("Manga")
}
}
}.joinToString(", ")
data.detailsTable.forEach { (detail, value) ->
append(detail.display)
append(" ")
append(value.trim())

appendLine()
}
}

private val chapterHeaderMatcher = """chapters?""".toRegex()
private val groupHeaderMatcher = """groups?""".toRegex()
private val dateHeaderMatcher = """added|date""".toRegex()
private val languageHeaderMatcher = """language""".toRegex()
private val chapterNumberMatcher = """[Cc][Hh][Aa][Pp][Tt][Ee][Rr]\s*(\d+)(?:\s*[.,-]\s*(\d+))?""".toRegex()
private val looseNumberMatcher = """(\d+)(?:\s*[.,-]\s*(\d+))?""".toRegex()
update_strategy = when (status) {
SManga.CANCELLED, SManga.PUBLISHING_FINISHED -> UpdateStrategy.ONLY_FETCH_ONCE
else -> UpdateStrategy.ALWAYS_UPDATE
}

genre = data.detailsTable[DataExtractor.BookDetail.GENRE]!!
}
}

/**
 * Handles the [Response] given by [chapterListRequest]'s [Request].
 * [HttpSource]'s inheritors will have a default [chapterListRequest] that asks for:
 * ```
 * GET(baseUrl + manga.url, headers)
 * ```
 * You can override [chapterListRequest] if this is not the case for you.
 *
 * Note that you should use [SChapter.create] instead of implementing your own version of [SChapter].
 *
 * The chapters list appears in the app from top to bottom (with the default source sort),
 * be careful of the direction in which you sort it!
 */
override fun chapterListParse(response: Response): List<SChapter> {
val document = response.asJsoup()
val chaptersTable = document.select("table").firstOrNull { it.containsReadLinks() } ?: return emptyList()
val document: Document = response.asJsoup()
val extractor = DataExtractor(document)
val bookChapters: Map<ScanGroup, List<DataExtractor.BookChapter>> = extractor.bookChapters

val thead: Element = chaptersTable.select("thead").firstOrNull() ?: return emptyList()
val tbody: Element = chaptersTable.select("tbody").firstOrNull() ?: return emptyList()
val blLangs: Set<String> = preferences.blacklistedLanguages()
val wlLangs: Set<String> = preferences.whitelistedLanguages()

val columnTypes = thead.select("tr").firstOrNull()?.children()?.select("td") ?: return emptyList()
val textTypes = columnTypes.map { it.text().lowercase(Locale.US) }
val normalSize = textTypes.size

val chaptersIndex: Int = textTypes.indexOfFirst { it.matches(chapterHeaderMatcher) }.takeIf { it >= 0 } ?: return emptyList()
val dateIndex: Int = textTypes.indexOfFirst { it.matches(dateHeaderMatcher) }.takeIf { it >= 0 } ?: return emptyList()
val groupIndex: Int? = textTypes.indexOfFirst { it.matches(groupHeaderMatcher) }.takeIf { it >= 0 }
val languageIndex: Int? = textTypes.indexOfFirst { it.matches(languageHeaderMatcher) }.takeIf { it >= 0 }

val dataRows = tbody.children().select("tr")

val blLangs = preferences.blacklistedLanguages
val wlLangs = preferences.whitelistedLanguages

return dataRows.mapNotNull chapters@{ tr ->
val rowData = tr.children().select("td")

if (rowData.size != normalSize) {
return@chapters null
}

val chapter: Element = rowData[chaptersIndex]
val date: Element = rowData[dateIndex]
val group: Element? = groupIndex?.let(rowData::get)
val language: Element? = languageIndex?.let(rowData::get)

language?.text()?.lowercase(Locale.US)?.let { lang ->
if (lang in blLangs && lang !in wlLangs) return@chapters null
}

val chapterLink = chapter.select("a").first()!!.attrNormalizedUrl("href")!!

val relativeURL = chapterLink.rawRelative ?: return@chapters null

SChapter.create().apply {
chapter_number = chapter.text()
.let { (chapterNumberMatcher.find(it) ?: looseNumberMatcher.find(it)) }
?.let { result ->
val integral = result.groupValues[1]
val fractional = result.groupValues.getOrNull(2)

"""${integral}$fractional""".toFloat()
|
||||
} ?: -1f

url = relativeURL
scanlator = group?.text() ?: "<UNKNOWN>"
name = chapter.text()
date_upload = date.text().parseDate()
}
}.toList()
}

override fun imageUrlParse(response: Response) = throw UnsupportedOperationException("Not used")

private val callpageUrl = """https://projectsuki.com/callpage"""
private val jsonMediaType = "application/json;charset=UTF-8".toMediaType()
override fun fetchPageList(chapter: SChapter): Observable<List<Page>> {
// chapter.url is /read/<bookid>/<chapterid>/...
val url = chapter.url.toNormalURL() ?: return Observable.just(emptyList())

val bookid = url.pathSegments[1] // <bookid>
val chapterid = url.pathSegments[2] // <chapterid>

val callpageHeaders = headersBuilder()
.add("X-Requested-With", "XMLHttpRequest")
.add("Content-Type", "application/json;charset=UTF-8")
.build()

val callpageBody = Json.encodeToString(
mapOf(
"bookid" to bookid,
"chapterid" to chapterid,
"first" to "true",
),
).toRequestBody(jsonMediaType)

return client.newCall(
POST(callpageUrl, callpageHeaders, callpageBody),
).asObservableSuccess()
.map { response ->
callpageParse(chapter, response)
}
}

@Suppress("UNUSED_PARAMETER")
private fun callpageParse(chapter: SChapter, response: Response): List<Page> {
// response contains the html src with images
val src = Json.parseToJsonElement(response.body.string()).jsonObject["src"]?.jsonPrimitive?.content ?: return emptyList()
val images = Jsoup.parseBodyFragment(src).select("img")
// images urls are /images/gallery/<bookid>/<uuid>/<pagenum>? (empty query for some reason)
val urls = images.mapNotNull { it.attrNormalizedUrl("src") }
if (urls.isEmpty()) return emptyList()

val anUrl = urls.random()
val pageNums = urls.mapTo(ArrayList()) { it.pathSegments[4] }
pageNums += "001"

fun makeURL(pageNum: String) = anUrl.newBuilder()
.setPathSegment(anUrl.pathSegments.lastIndex, pageNum)
.build()

return pageNums.distinct().sortedBy { it.toInt() }.mapIndexed { index, number ->
Page(
index,
"",
makeURL(number).rawAbsolute,
return bookChapters.asSequence()
.flatMap { (_, chapters) -> chapters }
.filter { it.chapterLanguage !in blLangs }
.filter { wlLangs.isEmpty() || it.chapterLanguage == UNKNOWN_LANGUAGE || it.chapterLanguage in wlLangs }
.toList()
.sortedWith(
compareByDescending<DataExtractor.BookChapter> { chapter -> chapter.chapterNumber }
.thenBy { chapter -> chapter.chapterGroup }
.thenBy { chapter -> chapter.chapterLanguage },
)
}.distinctBy { it.imageUrl }
.map { bookChapter ->
SChapter.create().apply {
url = bookChapter.chapterUrl.rawRelative ?: reportErrorToUser { "Could not relativize ${bookChapter.chapterUrl}" }
name = bookChapter.chapterTitle
date_upload = bookChapter.chapterDateAdded?.time ?: 0L
scanlator = """${bookChapter.chapterGroup} | ${bookChapter.chapterLanguage.replaceFirstChar(Char::uppercaseChar)}"""
chapter_number = bookChapter.chapterNumber!!.let { (main, sub) ->
// no fractional part, log(0) is -Inf (technically undefined)
if (sub == 0u) return@let main.toFloat()

val subD = sub.toDouble()
// 1 + floor(log10(subD)) finds the number of digits (in base 10) "subD" has
// see https://www.wolframalpha.com/input?i=LogLinearPlot+y%3D1+%2B+floor%28log10%28x%29%29%2C+where+x%3D1+to+10%5E9
val digits: Double = 1.0 + floor(log10(subD))
val fractional: Double = subD / 10.0.pow(digits)
// this basically creates a float that has "main" as integral part and "sub" as fractional part
// see https://www.wolframalpha.com/input?i=LogLinearPlot+y%3Dx+%2F+10%5E%281+%2B+floor%28log10%28x%29%29%29%2C+where+x%3D0+to+10%5E9
// the lines look curved because it's a logarithmic plot in the x-axis, but they're straight in a linear plot
(main.toDouble() + fractional).toFloat()
}
}
}
}
|
||||
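The integral/fractional packing used for `chapter_number` above can be exercised in isolation. A minimal sketch, assuming a `(main, sub)` pair of unsigned chapter-number components (the `packChapterNumber` name is mine; the extension inlines this logic):

```kotlin
import kotlin.math.floor
import kotlin.math.log10
import kotlin.math.pow

// Packs "main" as the integral part and "sub" as the fractional part:
// main=12, sub=5 -> 12.5f; main=12, sub=345 -> 12.345f.
fun packChapterNumber(main: UInt, sub: UInt): Float {
    if (sub == 0u) return main.toFloat() // log10(0) is -Inf, so short-circuit
    val subD = sub.toDouble()
    val digits = 1.0 + floor(log10(subD)) // number of base-10 digits in sub
    return (main.toDouble() + subD / 10.0.pow(digits)).toFloat()
}
```

Note that, exactly as in the original logic, sub-chapters with different digit counts can interleave (`12.45f < 12.5f` even though sub-chapter 45 may come after 5); the packing only aims at a readable decimal representation.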

    override fun pageListParse(response: Response): List<Page> = throw UnsupportedOperationException("not used")

    /**
     * Usually using [pageListRequest] and [pageListParse] should be enough,
     * but in this case we override the method to directly ask the server for the chapter pages (images).
     *
     * When constructing a [Page] there are 4 properties you need to consider:
     * - [index][Page.index] -> **ignored**: the list of pages should be returned already sorted!
     * - [url][Page.url] -> by default used by [imageUrlRequest] to create a `GET(page.url, headers)` [Request].
     *   Its [Response] will be given to [imageUrlParse], responsible for retrieving the value of [Page.imageUrl].
     *   This property should be left blank if [Page.imageUrl] can already be extracted at this stage.
     * - [imageUrl][Page.imageUrl] -> by default used by [imageRequest] to create a `GET(page.imageUrl, headers)` [Request].
     * - [uri][Page.uri] -> **DEPRECATED**: do not use
     */
    override fun fetchPageList(chapter: SChapter): Observable<List<Page>> {
        val pathMatch: PathMatchResult = """${homepageUri.toASCIIString()}/${chapter.url}""".toHttpUrl().matchAgainst(chapterUrlPattern)
        if (!pathMatch.doesMatch) {
            reportErrorToUser {
                "chapter url ${chapter.url} does not match expected pattern"
            }
        }

        return client.newCall(ProjectSukiAPI.chapterPagesRequest(json, headers, pathMatch["bookid"]!!.value, pathMatch["chapterid"]!!.value))
            .asObservableSuccess()
            .map { ProjectSukiAPI.parseChapterPagesResponse(json, it) }
    }

    /**
     * Not used in this extension, as [Page.imageUrl] is set directly.
     */
    override fun imageUrlParse(response: Response): String = reportErrorToUser {
        // give a hint on who called this method
        "invalid ${Thread.currentThread().stackTrace.take(3)}"
    }

    /**
     * Not used in this extension, as we override [fetchPageList] to modify the default behaviour.
     */
    override fun pageListParse(response: Response): List<Page> = reportErrorToUser {
        // give a hint on who called this method
        "invalid ${Thread.currentThread().stackTrace.take(3)}"
    }
}
@@ -0,0 +1,353 @@
package eu.kanade.tachiyomi.extension.all.projectsuki

import android.icu.text.BreakIterator
import android.icu.text.Collator
import android.icu.text.RuleBasedCollator
import android.icu.text.StringSearch
import android.os.Build
import eu.kanade.tachiyomi.network.POST
import eu.kanade.tachiyomi.source.model.MangasPage
import eu.kanade.tachiyomi.source.model.Page
import eu.kanade.tachiyomi.source.model.SManga
import kotlinx.serialization.SerialName
import kotlinx.serialization.Serializable
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json
import kotlinx.serialization.json.JsonElement
import kotlinx.serialization.json.JsonObject
import kotlinx.serialization.json.JsonPrimitive
import okhttp3.Headers
import okhttp3.HttpUrl
import okhttp3.HttpUrl.Companion.toHttpUrlOrNull
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.Request
import okhttp3.RequestBody
import okhttp3.RequestBody.Companion.toRequestBody
import okhttp3.Response
import org.jsoup.Jsoup
import org.jsoup.nodes.Element
import java.text.StringCharacterIterator

/**
 * @see EXTENSION_INFO Found in ProjectSuki.kt
 */
@Suppress("unused")
private inline val INFO: Nothing get() = error("INFO")

internal val callpageUrl = homepageUrl.newBuilder().addPathSegment("callpage").build()
internal val apiBookSearchUrl = homepageUrl.newBuilder()
    .addPathSegment("api")
    .addPathSegment("book")
    .addPathSegment("search")
    .build()

/**
 * Json [MIME/media type](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types)
 */
internal val jsonMediaType = "application/json;charset=UTF-8".toMediaType()

private val imageExtensions = setOf(".jpg", ".png", ".jpeg", ".webp", ".gif", ".avif", ".tiff")
private val simpleSrcVariants = listOf("src", "data-src", "data-lazy-src")

/**
 * Tries to extract the image URL from some known ways of storing that information.
 */
fun Element.imageSrc(): HttpUrl? {
    simpleSrcVariants.forEach { variant ->
        if (hasAttr(variant)) {
            return attr("abs:$variant").toHttpUrlOrNull()
        }
    }

    if (hasAttr("srcset")) {
        return attr("abs:srcset").substringBefore(" ").toHttpUrlOrNull()
    }

    return attributes().firstOrNull {
        it.key.contains("src") && imageExtensions.any { ext -> it.value.contains(ext) }
    }?.value?.substringBefore(" ")?.toHttpUrlOrNull()
}
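The `srcset` branch above keeps only the first candidate URL. As a standalone sketch with plain strings (Jsoup's `abs:` resolution omitted; the `firstSrcsetCandidate` name is mine):

```kotlin
// A srcset value lists candidates as "url descriptor, url descriptor, ...".
// Taking everything before the first space yields the first candidate's URL,
// mirroring the substringBefore(" ") call in imageSrc().
fun firstSrcsetCandidate(srcset: String): String = srcset.trim().substringBefore(" ")
```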

internal typealias BookTitle = String

/**
 * Singleton responsible for handling API communications with Project Suki's server.
 *
 * @author Federico d'Alonzo <me@npgx.dev>
 */
object ProjectSukiAPI {

    private inline fun <reified T : Any> Any.tryAs(): T? = this as? T

    /**
     * Represents the data that needs to be sent to [callpageUrl] to obtain the pages of a chapter.
     *
     * @param first Actually represents a boolean; it needs to be a string, otherwise the server is unhappy. Seems mostly ignored.
     */
    @Serializable
    data class PagesRequestData(
        @SerialName("bookid")
        val bookID: BookID,
        @SerialName("chapterid")
        val chapterID: ChapterID,
        @SerialName("first")
        val first: String,
    ) {
        init {
            if (first != "true" && first != "false") {
                reportErrorToUser {
                    "PagesRequestData, first was \"$first\""
                }
            }
        }
    }

    /**
     * Creates a [Request] for the server to send the chapter's pages.
     */
    fun chapterPagesRequest(json: Json, headers: Headers, bookID: BookID, chapterID: ChapterID): Request {
        val newHeaders: Headers = headers.newBuilder()
            .add("X-Requested-With", "XMLHttpRequest")
            .add("Content-Type", "application/json;charset=UTF-8")
            .build()

        val body: RequestBody = json.encodeToString(PagesRequestData(bookID, chapterID, "true"))
            .toRequestBody(jsonMediaType)

        return POST(callpageUrl.toUri().toASCIIString(), newHeaders, body)
    }

    /**
     * Handles the [Response] returned from [chapterPagesRequest]'s [call][okhttp3.OkHttpClient.newCall].
     */
    fun parseChapterPagesResponse(json: Json, response: Response): List<Page> {
        // The response is a json object containing 2 elements:
        //   chapter_id: seems to be the same as chapterid, I'm not really sure what it's supposed to be
        //   src: source html that will be appended in the page
        // We're interested in src, which will be a simple string containing a list of very verbose <img> tags,
        // most important of which is the src attribute, which should actually be an absolute url,
        // but we can handle both cases at once anyways.
        // URLs are of the form /images/gallery/<bookid>/<uuid>/<page>;
        // the uuid is a 128-bit number in base 16 (hex), most likely there to manage edits and versions.

        // page 001 is included in the response
        val rawSrc: String = json.runCatching { parseToJsonElement(response.body.string()) }
            .getOrNull()
            ?.tryAs<JsonObject>()
            ?.get("src")
            ?.tryAs<JsonPrimitive>()
            ?.content ?: reportErrorToUser {
            "chapter pages aren't in the expected format!"
        }

        // we can handle relative urls by manually specifying the location of the "document"
        val srcFragment: Element = Jsoup.parseBodyFragment(rawSrc, homepageUri.toASCIIString())

        val urls: Map<HttpUrl, PathMatchResult> = srcFragment.allElements
            .asSequence()
            .mapNotNull { it.imageSrc() }
            .associateWith { it.matchAgainst(pageUrlPattern) } // create match result
            .filterValues { it.doesMatch } // make sure they are the urls we expect

        if (urls.isEmpty()) {
            reportErrorToUser {
                "chapter pages URLs aren't in the expected format!"
            }
        }

        return urls.entries
            .sortedBy { (_, match) -> match["pagenum"]!!.value.toUInt() } // they should already be sorted, but you're never too sure
            .mapIndexed { index, (url, _) ->
                Page(
                    index = index,
                    url = "", // skip fetchImageUrl
                    imageUrl = url.toUri().toASCIIString(),
                )
            }
    }
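The `toUInt()` sort above matters because the page number is a path *string*; sorting lexically would put "10" before "2". A small sketch with hypothetical paths:

```kotlin
// Page paths end in a numeric segment; compare them numerically, not lexically.
val pages = listOf(
    "/images/gallery/1/abc/10",
    "/images/gallery/1/abc/2",
    "/images/gallery/1/abc/9",
)
val sorted = pages.sortedBy { it.substringAfterLast('/').toUInt() }
```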

    /** Represents the data that needs to be sent to [apiBookSearchUrl] to obtain the complete list of books that have chapters. */
    @Serializable
    data class SearchRequestData(
        @SerialName("hash")
        val hash: String?,
    )

    /**
     * Creates a [Request] for the server to send the books.
     */
    fun bookSearchRequest(json: Json, headers: Headers): Request {
        val newHeaders: Headers = headers.newBuilder()
            .add("X-Requested-With", "XMLHttpRequest")
            .add("Content-Type", "application/json;charset=UTF-8")
            .add("Referer", homepageUrl.newBuilder().addPathSegment("browse").build().toUri().toASCIIString())
            .build()

        val body: RequestBody = json.encodeToString(SearchRequestData(null))
            .toRequestBody(jsonMediaType)

        return POST(apiBookSearchUrl.toUri().toASCIIString(), newHeaders, body)
    }

    /**
     * Handles the [Response] returned from [bookSearchRequest]'s [call][okhttp3.OkHttpClient.newCall].
     */
    fun parseBookSearchResponse(json: Json, response: Response): Map<BookID, BookTitle> {
        val data: JsonObject = json.runCatching { parseToJsonElement(response.body.string()) }
            .getOrNull()
            ?.tryAs<JsonObject>()
            ?.get("data")
            ?.tryAs<JsonObject>() ?: reportErrorToUser {
            "books data isn't in the expected format!"
        }

        val refined: Map<BookID, BookTitle> = buildMap {
            data.forEach { (id: BookID, valueObj: JsonElement) ->
                val title: BookTitle = valueObj.tryAs<JsonObject>()
                    ?.get("value")
                    ?.tryAs<JsonPrimitive>()
                    ?.content ?: reportErrorToUser {
                    "books data isn't in the expected format!"
                }

                this[id] = title
            }
        }

        return refined
    }
}

private val alphaNumericRegex = """\p{Alnum}+""".toRegex(RegexOption.IGNORE_CASE)

/**
 * Creates a [MangasPage] containing a list of mangas sorted from best to worst match against [searchQuery].
 *
 * If even a single "word" from [searchQuery] matches, the manga will be included,
 * but sorting is done based on the number of matches.
 */
internal fun Map<BookID, BookTitle>.toMangasPage(searchQuery: String, useSimpleMode: Boolean): MangasPage {
    data class Match(val bookID: BookID, val title: BookTitle, val count: Int) {
        val bookUrl: HttpUrl = homepageUrl.newBuilder()
            .addPathSegment("book")
            .addPathSegment(bookID)
            .build()
    }

    when {
        useSimpleMode -> {
            // simple search, possibly faster
            val words: Set<String> = alphaNumericRegex.findAll(searchQuery).mapTo(HashSet()) { it.value }

            val matches: Map<BookID, Match> = mapValues { (bookID, bookTitle) ->
                val matchesCount: Int = words.sumOf { word ->
                    var count = 0
                    var idx = 0

                    while (true) {
                        val found = bookTitle.indexOf(word, idx, ignoreCase = true)
                        if (found < 0) break

                        idx = found + 1
                        count++
                    }

                    count
                }

                Match(bookID, bookTitle, matchesCount)
            }.filterValues { it.count > 0 }

            return MangasPage(
                mangas = matches.entries
                    .sortedWith(compareBy({ -it.value.count }, { it.value.title }))
                    .map { (bookID, match: Match) ->
                        SManga.create().apply {
                            title = match.title
                            url = match.bookUrl.rawRelative ?: reportErrorToUser { "Could not relativize ${match.bookUrl}" }
                            thumbnail_url = bookThumbnailUrl(bookID, "").toUri().toASCIIString()
                        }
                    },
                hasNextPage = false,
            )
        }

        Build.VERSION.SDK_INT >= Build.VERSION_CODES.N -> {
            // use ICU, better

            val searchWords: Set<String> = BreakIterator.getWordInstance().run {
                text = StringCharacterIterator(searchQuery)

                var left = first()
                var right = next()

                buildSet {
                    while (right != BreakIterator.DONE) {
                        if (ruleStatus != BreakIterator.WORD_NONE) {
                            add(searchQuery.substring(left, right))
                        }
                        left = right
                        right = next()
                    }
                }
            }

            val stringSearch = StringSearch(
                /* pattern = */ "dummy",
                /* target = */ StringCharacterIterator("dummy"),
                /* collator = */
                (Collator.getInstance() as RuleBasedCollator).apply {
                    isCaseLevel = true
                    strength = Collator.PRIMARY
                    decomposition = Collator.CANONICAL_DECOMPOSITION
                },
            )

            val matches: Map<BookID, Match> = mapValues { (bookID, bookTitle) ->
                stringSearch.target = StringCharacterIterator(bookTitle)

                val matchesCount: Int = searchWords.sumOf { word ->
                    val search: StringSearch = stringSearch.apply {
                        this.pattern = word
                    }

                    var count = 0
                    var idx = search.first()
                    while (idx != StringSearch.DONE) {
                        count++
                        idx = search.next()
                    }

                    count
                }

                Match(bookID, bookTitle, matchesCount)
            }.filterValues { it.count > 0 }

            return MangasPage(
                mangas = matches.entries
                    .sortedWith(compareBy({ -it.value.count }, { it.value.title }))
                    .map { (bookID, match: Match) ->
                        SManga.create().apply {
                            title = match.title
                            url = match.bookUrl.rawRelative ?: reportErrorToUser { "Could not relativize ${match.bookUrl}" }
                            thumbnail_url = bookThumbnailUrl(bookID, "").toUri().toASCIIString()
                        }
                    },
                hasNextPage = false,
            )
        }

        else -> error(
            buildString {
                append("Please enable ")
                append(ProjectSukiFilters.SearchMode.SIMPLE)
                append(" Search Mode: ")
                append(ProjectSukiFilters.SearchMode.SMART)
                append(" search requires Android API version >= 24, but ")
                append(Build.VERSION.SDK_INT)
                append(" was found!")
            },
        )
    }
}
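The Simple-mode inner loop above counts overlapping, case-insensitive occurrences by advancing one character past each hit. Extracted as a standalone sketch (the `countOccurrences` name is mine):

```kotlin
// Counts (possibly overlapping) case-insensitive occurrences of `word` in `title`.
fun countOccurrences(title: String, word: String): Int {
    var count = 0
    var idx = 0
    while (true) {
        val found = title.indexOf(word, idx, ignoreCase = true)
        if (found < 0) break
        idx = found + 1 // advance by one char, so overlapping matches are counted
        count++
    }
    return count
}
```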
@@ -0,0 +1,157 @@
@file:Suppress("CanSealedSubClassBeObject")

package eu.kanade.tachiyomi.extension.all.projectsuki

import eu.kanade.tachiyomi.source.model.Filter
import okhttp3.HttpUrl

/**
 * @see EXTENSION_INFO Found in ProjectSuki.kt
 */
@Suppress("unused")
private inline val INFO: Nothing get() = error("INFO")

internal val newlineRegex = """\R""".toRegex(RegexOption.IGNORE_CASE)

/**
 * Handler for Project Suki's applicable [filters][Filter].
 *
 * @author Federico d'Alonzo <me@npgx.dev>
 */
@Suppress("NOTHING_TO_INLINE")
object ProjectSukiFilters {

    internal sealed interface ProjectSukiFilter {
        fun HttpUrl.Builder.applyFilter()
        val headers: List<Filter.Header> get() = emptyList()
    }

    private inline fun headers(block: () -> String): List<Filter.Header> = block().split(newlineRegex).map { Filter.Header(it) }

    private suspend inline fun <T> SequenceScope<Filter<*>>.addFilter(filter: T) where T : Filter<*>, T : ProjectSukiFilter {
        yieldAll(filter.headers)
        yield(filter)
    }

    @Suppress("UNUSED_PARAMETER")
    fun headersSequence(preferences: ProjectSukiPreferences): Sequence<Filter.Header> = sequenceOf()

    fun filtersSequence(preferences: ProjectSukiPreferences): Sequence<Filter<*>> = sequence {
        addFilter(SearchModeFilter(preferences.defaultSearchMode()))
        yield(Filter.Separator())
        yield(Filter.Header("All filters below will only work in Full Site mode."))
        addFilter(Origin())
        addFilter(Status())
        yield(Filter.Separator())
        addFilter(Author())
        addFilter(Artist())
    }

    @Suppress("UNUSED_PARAMETER")
    fun footersSequence(preferences: ProjectSukiPreferences): Sequence<Filter.Header> = sequenceOf()

    /** Project Suki requires an extra `adv=1` query parameter when using these filters */
    private inline fun HttpUrl.Builder.ensureAdv(): HttpUrl.Builder = setQueryParameter("adv", "1")

    enum class StatusValue(val display: String, val query: String) {
        ANY("Any", ""),
        ONGOING("Ongoing", "ongoing"),
        COMPLETED("Completed", "completed"),
        HIATUS("Hiatus", "hiatus"),
        CANCELLED("Cancelled", "cancelled"),
        ;

        override fun toString(): String = display

        companion object {
            private val values: Array<StatusValue> = values()
            operator fun get(ordinal: Int): StatusValue = values[ordinal]
        }
    }

    enum class OriginValue(val display: String, val query: String) {
        ANY("Any", ""),
        KOREA("Korea", "kr"),
        CHINA("China", "cn"),
        JAPAN("Japan", "jp"),
        ;

        override fun toString(): String = display

        companion object {
            private val values: Array<OriginValue> = OriginValue.values()
            operator fun get(ordinal: Int): OriginValue = values[ordinal]
        }
    }

    enum class SearchMode(val display: String, val description: SearchMode.() -> String) {
        SMART("Smart", { "Searches for books that have chapters using Unicode ICU Collation and utilities, should work for queries in all languages." }),
        SIMPLE("Simple", { "Ideally the same as $SMART. Necessary for Android API < 24. MIGHT make searches faster. Might be unreliable for non-english characters." }),
        FULL_SITE("Full Site", { "Executes a /search web query on the website. Might return non-relevant results without chapters." }),
        ;

        override fun toString(): String = display

        companion object {
            private val values: Array<SearchMode> = SearchMode.values()
            operator fun get(ordinal: Int): SearchMode = values[ordinal]
        }
    }

    class SearchModeFilter(default: SearchMode) : Filter.Select<SearchMode>("Search Mode", SearchMode.values(), state = default.ordinal), ProjectSukiFilter {
        override val headers: List<Header> = headers {
            """
            See Extensions > Project Suki > Gear icon
            for differences and for how to set the default.
            """.trimIndent()
        }

        override fun HttpUrl.Builder.applyFilter() = Unit
    }

    class Author : Filter.Text("Author"), ProjectSukiFilter {
        override val headers: List<Header> = headers {
            """
            Search by a single author:
            """.trimIndent()
        }

        override fun HttpUrl.Builder.applyFilter() {
            when {
                state.isNotBlank() -> ensureAdv().addQueryParameter("author", state)
            }
        }
    }

    class Artist : Filter.Text("Artist"), ProjectSukiFilter {
        override val headers: List<Header> = headers {
            """
            Search by a single artist:
            """.trimIndent()
        }

        override fun HttpUrl.Builder.applyFilter() {
            when {
                state.isNotBlank() -> ensureAdv().addQueryParameter("artist", state)
            }
        }
    }

    class Status : Filter.Select<StatusValue>("Status", StatusValue.values()), ProjectSukiFilter {
        override fun HttpUrl.Builder.applyFilter() {
            when (val state = StatusValue[state /* ordinal */]) {
                StatusValue.ANY -> {} // default, do nothing
                else -> ensureAdv().addQueryParameter("status", state.query)
            }
        }
    }

    class Origin : Filter.Select<OriginValue>("Origin", OriginValue.values()), ProjectSukiFilter {
        override fun HttpUrl.Builder.applyFilter() {
            when (val state = OriginValue[state /* ordinal */]) {
                OriginValue.ANY -> {} // default, do nothing
                else -> ensureAdv().addQueryParameter("origin", state.query)
            }
        }
    }
}
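The `headers { ... }` helper above turns a multi-line string into one `Filter.Header` per line via the `\R` (any line break) regex. A sketch with plain strings standing in for `Filter.Header`:

```kotlin
val anyLineBreak = """\R""".toRegex()

// One entry per line, mirroring headers { ... } (Filter.Header replaced by String).
fun headerLines(block: () -> String): List<String> = block().split(anyLineBreak)

val lines = headerLines {
    """
    See Extensions > Project Suki > Gear icon
    for differences and for how to set the default.
    """.trimIndent()
}
```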
@@ -0,0 +1,117 @@
package eu.kanade.tachiyomi.extension.all.projectsuki

import android.app.Application
import android.content.SharedPreferences
import androidx.preference.EditTextPreference
import androidx.preference.ListPreference
import androidx.preference.Preference
import androidx.preference.PreferenceScreen
import eu.kanade.tachiyomi.lib.randomua.addRandomUAPreferenceToScreen
import uy.kohesive.injekt.Injekt
import uy.kohesive.injekt.api.get
import java.util.Locale

/**
 * @see EXTENSION_INFO Found in ProjectSuki.kt
 */
@Suppress("unused")
private inline val INFO: Nothing get() = error("INFO")

/**
 * @author Federico d'Alonzo <me@npgx.dev>
 */
class ProjectSukiPreferences(id: Long) {

    internal val shared by lazy { Injekt.get<Application>().getSharedPreferences("source_$id", 0x0000 /* Context.MODE_PRIVATE */) }

    abstract inner class PSPreference<Raw : Any, T : Any>(val preferenceIdentifier: String, val default: Raw) {

        abstract val rawGet: SharedPreferences.(identifier: String, default: Raw) -> Raw
        abstract fun Raw.transform(): T
        abstract fun PreferenceScreen.constructPreference(): Preference

        protected inline fun summary(block: () -> String): String = block().trimIndent()

        operator fun invoke(): T = shared.rawGet(preferenceIdentifier, default).transform()
    }

    val defaultSearchMode = object : PSPreference<String, ProjectSukiFilters.SearchMode>("$SHORT_FORM_ID-default-search-mode", ProjectSukiFilters.SearchMode.SMART.display) {
        override val rawGet: SharedPreferences.(identifier: String, default: String) -> String = { id, def -> getString(id, def)!! }
        override fun String.transform(): ProjectSukiFilters.SearchMode = ProjectSukiFilters.SearchMode.values()
            .firstOrNull { it.display == this } ?: ProjectSukiFilters.SearchMode.SMART

        override fun PreferenceScreen.constructPreference() = ListPreference(context).apply {
            key = preferenceIdentifier
            entries = ProjectSukiFilters.SearchMode.values().map { it.display }.toTypedArray()
            entryValues = ProjectSukiFilters.SearchMode.values().map { it.display }.toTypedArray()
            setDefaultValue(ProjectSukiFilters.SearchMode.SMART.display)
            title = "Default search mode"
            summary = summary {
                """
                Select which Search Mode to use by default. Can be useful for global searches. ${ProjectSukiFilters.SearchMode.SMART} is recommended.
                - ${ProjectSukiFilters.SearchMode.SMART}: ${ProjectSukiFilters.SearchMode.SMART.run { description() }}
                - ${ProjectSukiFilters.SearchMode.SIMPLE}: ${ProjectSukiFilters.SearchMode.SIMPLE.run { description() }}
                - ${ProjectSukiFilters.SearchMode.FULL_SITE}: ${ProjectSukiFilters.SearchMode.FULL_SITE.run { description() }}
                """.trimIndent()
            }
        }
    }

    val whitelistedLanguages = object : PSPreference<String, Set<String>>("$SHORT_FORM_ID-languages-whitelist", "") {
        override val rawGet: SharedPreferences.(identifier: String, default: String) -> String = { id, def -> getString(id, def)!! }
        override fun String.transform(): Set<String> {
            return split(',')
                .filter { it.isNotBlank() }
                .mapTo(HashSet()) { it.trim().lowercase(Locale.US) }
        }

        override fun PreferenceScreen.constructPreference() = EditTextPreference(context).apply {
            key = preferenceIdentifier
            title = "Whitelisted languages"
            dialogTitle = "Include chapters in the following languages:"
            dialogMessage = "Enter the languages you want to include by separating them with a comma ',' (e.g. \"English, SPANISH, gReEk\", without quotes (\"))."
            summary = summary {
                """
                NOTE: You will need to refresh comics that have already been fetched!! (drag down in the comic page in tachiyomi)

                When empty, all languages are allowed (see blacklisting).
                It will match the string present in the "Language" column of the chapter (NOT case sensitive).
                Chapters that do not have a "Language" column will be listed as "$UNKNOWN_LANGUAGE", which is always whitelisted (see blacklisting).
                """
            }
        }
    }

    val blacklistedLanguages = object : PSPreference<String, Set<String>>("$SHORT_FORM_ID-languages-blacklist", "") {
        override val rawGet: SharedPreferences.(identifier: String, default: String) -> String = { id, def -> getString(id, def)!! }
        override fun String.transform(): Set<String> {
            return split(",")
                .filter { it.isNotBlank() }
                .mapTo(HashSet()) { it.trim().lowercase(Locale.US) }
        }

        override fun PreferenceScreen.constructPreference() = EditTextPreference(context).apply {
            key = preferenceIdentifier
            title = "Blacklisted languages"
            dialogTitle = "Exclude chapters in the following languages:"
            dialogMessage = "Enter the languages you want to exclude by separating them with a comma ',' (e.g. \"English, SPANISH, gReEk\", without quotes (\"))."
            summary = summary {
                """
                NOTE: You will need to refresh comics that have already been fetched!! (drag down in the comic page in tachiyomi)

                When a language is in BOTH the whitelist and the blacklist, it will be EXCLUDED.
                It will match the string present in the "Language" column of the chapter (NOT case sensitive).
                Chapters that do not have a "Language" column will be listed as "$UNKNOWN_LANGUAGE"; you can exclude them by adding "unknown" to the list (e.g. "Chinese, unknown, Alienese").
                """.trimIndent()
            }
        }
    }

    fun PreferenceScreen.configure() {
        addRandomUAPreferenceToScreen(this)

        addPreference(defaultSearchMode.run { constructPreference() })
        addPreference(whitelistedLanguages.run { constructPreference() })
        addPreference(blacklistedLanguages.run { constructPreference() })
    }
}
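The whitelist/blacklist preferences above normalize a comma-separated value into a lowercase set, and the chapter filter then gives the blacklist precedence (with the "unknown" language always passing the whitelist). A sketch of both pieces, with assumed names (`parseLanguageList`, `isChapterKept` are mine):

```kotlin
import java.util.Locale

// "English, SPANISH, gReEk" -> {"english", "spanish", "greek"}
fun parseLanguageList(raw: String): Set<String> = raw.split(',')
    .filter { it.isNotBlank() }
    .mapTo(HashSet()) { it.trim().lowercase(Locale.US) }

// Mirrors the chapter filter: blacklist first, then whitelist
// (an empty whitelist, or the unknown language, always passes the whitelist).
fun isChapterKept(
    language: String,
    whitelist: Set<String>,
    blacklist: Set<String>,
    unknown: String = "unknown", // stand-in for UNKNOWN_LANGUAGE
): Boolean = language !in blacklist &&
    (whitelist.isEmpty() || language == unknown || language in whitelist)
```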
@@ -0,0 +1,67 @@
package eu.kanade.tachiyomi.extension.all.projectsuki

import android.app.Activity
import android.content.ActivityNotFoundException
import android.content.Intent
import android.os.Bundle
import android.util.Log
import kotlin.system.exitProcess

/**
 * @see EXTENSION_INFO Found in ProjectSuki.kt
 */
@Suppress("unused")
private inline val INFO: Nothing get() = error("INFO")

/**
 * `$ps:`
 */
internal const val INTENT_QUERY_PREFIX: String = """${'$'}$SHORT_FORM_ID:"""

/**
 * See [handleIntentAction](https://github.com/tachiyomiorg/tachiyomi/blob/0f9895eec8f5808210f291d1e0ef5cc9f73ccb44/app/src/main/java/eu/kanade/tachiyomi/ui/main/MainActivity.kt#L401)
 * and [GlobalSearchScreen](https://github.com/tachiyomiorg/tachiyomi/blob/0f9895eec8f5808210f291d1e0ef5cc9f73ccb44/app/src/main/java/eu/kanade/tachiyomi/ui/browse/source/globalsearch/GlobalSearchScreen.kt#L19)
 * (these are permalinks, search for updated variants).
 *
 * See [AndroidManifest.xml](https://developer.android.com/guide/topics/manifest/manifest-intro)
 * for what URIs this [Activity](https://developer.android.com/guide/components/activities/intro-activities)
 * can receive.
 *
 * For this specific class you can test the activity by doing (see [CONTRIBUTING](https://github.com/tachiyomiorg/tachiyomi-extensions/blob/master/CONTRIBUTING.md#url-intent-filter)):
 * ```
 * adb shell am start -d "https://projectsuki.com/search?q=omniscient" -a android.intent.action.VIEW
 * ```
 *
 * @author Federico d'Alonzo <me@npgx.dev>
 */
class ProjectSukiSearchUrlActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        if (intent?.data?.pathSegments?.size != 1) {
            Log.e("PSUrlActivity", "could not handle URI ${intent?.data} from intent $intent")
        }

        val intent = Intent().apply {
            // tell tachiyomi we want to search for something
            action = "eu.kanade.tachiyomi.SEARCH"
            // "filter" for our own extension instead of doing a global search
            putExtra("filter", packageName)
            // value that will be passed onto the "query" parameter in fetchSearchManga
            putExtra("query", "${INTENT_QUERY_PREFIX}${intent?.data?.query}")
        }

        try {
            // actually do the thing
            startActivity(intent)
        } catch (e: ActivityNotFoundException) {
            // tachiyomi isn't installed (?)
            Log.e("PSUrlActivity", e.toString())
        }

        // we're done
        finish()
        // just for safety
        exitProcess(0)
    }
}
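The `INTENT_QUERY_PREFIX` above relies on the `${'$'}` idiom to embed a literal dollar sign inside a string template. A minimal sketch, assuming `SHORT_FORM_ID == "ps"` (as the `$ps:` doc comment indicates; the constant is defined elsewhere in the extension):

```kotlin
const val SHORT_FORM_ID = "ps" // assumed value; defined elsewhere in the extension

// ${'$'} inserts a literal '$' into a string template, so a raw string
// (which has no \$ escape) can still start with a dollar sign.
val intentQueryPrefix = """${'$'}$SHORT_FORM_ID:"""
```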
@@ -1,33 +0,0 @@
package eu.kanade.tachiyomi.extension.all.projectsuki

import android.app.Activity
import android.content.ActivityNotFoundException
import android.content.Intent
import android.os.Bundle
import android.util.Log
import kotlin.system.exitProcess

class ProjectSukiUrlActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val pathSegments = intent?.data?.pathSegments
        if (pathSegments != null && pathSegments.size > 1) {
            val mainIntent = Intent().apply {
                action = "eu.kanade.tachiyomi.SEARCH"
                putExtra("query", "${PS.SEARCH_INTENT_PREFIX}${pathSegments[1]}")
                putExtra("filter", packageName)
            }

            try {
                startActivity(mainIntent)
            } catch (e: ActivityNotFoundException) {
                Log.e("PSUrlActivity", e.toString())
            }
        } else {
            Log.e("PSUrlActivity", "could not parse uri from intent $intent")
        }

        finish()
        exitProcess(0)
    }
}