Add support for My Manga Reader CMS sources (many, many sources) (#103)

* Add My Manga Reader CMS sources and generator

Currently supported sources:
- EN: Read Comics Online
- EN: Fallen Angels Scans
- EN: MangaRoot
- EN: Mangawww Reader
- EN: MangaForLife
- ES: My-mangas.com
- FA: TrinityReader
- ID: Manga Desu
- JA: IchigoBook
- TR: MangAoi

* Add more sources
Code cleanup
Add thumbnail guesser to keyword search (see the sketch after the source list below)
Fix build

Currently supported sources:
- AR: مانجا اون لاين
- EN: Read Comics Online
- EN: Fallen Angels Scans
- EN: MangaRoot
- EN: Mangawww Reader
- EN: MangaForLife
- EN: Manga Mofo
- EN: H-Manga.moe
- EN: MangaBlue
- EN: Manga Forest
- EN: DManga
- ES: My-mangas.com
- FA: TrinityReader
- FR: Manga-LEL
- FR: Manga Etonnia
- FR: Tous Vos Scans
- ID: Manga Desu
- ID: Komik Mangafire.ID
- ID: MangaOnline
- ID: MangaNesia
- ID: KOMIK.CO.ID
- ID: MangaID
- ID: Indo Manga Reader
- JA: IchigoBook
- JA: Mangaraw Online
- PL: Candy Scans
- PT: Comic Space
- PT: Mangás Yuri
- RU: NAKAMA
- TR: MangAoi
- TR: MangaHanta
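
The thumbnail guesser works around the search endpoint returning only a title and a URL segment per suggestion: the cover URL is guessed from the CMS's conventional upload path instead. A minimal Kotlin sketch of the idea, mirroring the pattern used in MyMangaReaderCMSSource further down:

    // Sketch: guess a cover URL for a search suggestion, assuming the stock
    // My Manga Reader CMS upload layout (same pattern as in MyMangaReaderCMSSource below).
    fun guessThumbnailUrl(baseUrl: String, mangaSegment: String): String =
        "$baseUrl/uploads/manga/$mangaSegment/cover/cover_250x350.jpg"

    // guessThumbnailUrl("http://manga.fascans.com", "one-piece")
    //   -> "http://manga.fascans.com/uploads/manga/one-piece/cover/cover_250x350.jpg"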

* Disable latest updates for sources that do not support it

* The latest-updates support scanner no longer generates false positives
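
The scanner's check, as implemented in genSources.sh further down, probes the latest-release listing and treats an HTTP 200 response that does not contain the CMS's generic "Whoops, looks like something went wrong" error text as support. A rough Kotlin rendering of the same probe (the real scanner uses curl; the URL pattern is the one the script queries):

    import java.net.HttpURLConnection
    import java.net.URL

    // Sketch of the supports_latest probe from genSources.sh, redone in Kotlin for illustration.
    fun supportsLatest(baseUrl: String): Boolean {
        val conn = URL("$baseUrl/filterList?page=1&sortBy=last_release&asc=false")
                .openConnection() as HttpURLConnection
        return try {
            val body = conn.inputStream.bufferedReader().readText()
            conn.responseCode == 200 && "Whoops, looks like something went wrong" !in body
        } catch (e: Exception) {
            false
        } finally {
            conn.disconnect()
        }
    }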

* Fix source generator being included in APK
Remove sources that went offline

Currently supported sources:
- AR: مانجا اون لاين
- EN: Read Comics Online
- EN: Fallen Angels Scans
- EN: MangaRoot
- EN: Mangawww Reader
- EN: MangaForLife
- EN: Manga Mofo
- EN: H-Manga.moe
- EN: MangaBlue
- EN: Manga Forest
- EN: DManga
- ES: My-mangas.com
- FA: TrinityReader
- FR: Manga-LEL
- FR: Manga Etonnia
- FR: Tous Vos Scans
- ID: Manga Desu
- ID: MangaOnline
- ID: KOMIK.CO.ID
- ID: MangaID
- JA: Mangaraw Online
- PL: Candy Scans
- PT: Mangás Yuri
- RU: NAKAMA
- TR: MangAoi
- TR: MangaHanta

* Code cleanup
Remove dead sources
Fix announcements being recognized as chapters in some sources (see the sketch after the source list below)

Currently supported sources:
- AR: مانجا اون لاين
- EN: Read Comics Online
- EN: Fallen Angels Scans
- EN: Mangawww Reader
- EN: MangaForLife
- EN: Manga Mofo
- EN: H-Manga.moe
- EN: MangaBlue
- EN: Manga Forest
- EN: DManga
- ES: My-mangas.com
- FA: TrinityReader
- FR: Manga-LEL
- FR: Manga Etonnia
- FR: Tous Vos Scans
- ID: Manga Desu
- ID: MangaOnline
- ID: KOMIK.CO.ID
- ID: MangaID
- JA: Mangaraw Online
- PL: Candy Scans
- PT: Mangás Yuri
- RU: NAKAMA
- TR: MangAoi
- TR: MangaHanta
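
The announcement fix filters the chapter list by URL: some sites put links to news posts inside the chapters box, so anything whose first path segment does not match the manga item path is dropped. A simplified sketch of the check (the real nullableChapterFromElement() further down uses android.net.Uri; plain string handling here for brevity):

    // Sketch: treat an entry as a chapter only if its URL lives under the manga item path,
    // so announcement posts linked from the chapters box are skipped.
    fun isChapterUrl(url: String, itemUrlPath: String): Boolean {
        val firstSegment = url.substringAfter("://")
                .split("/")
                .drop(1)                       // drop the host
                .firstOrNull { it.isNotEmpty() }
        return firstSegment.equals(itemUrlPath, ignoreCase = true)
    }

    // isChapterUrl("http://example.com/manga/some-title/chapter-1", "manga") -> true
    // isChapterUrl("http://example.com/news/announcement", "manga")          -> false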

* Remove logging from the source (the logging library is not available)

* Fix HTML entities not being escaped
Add some new sources and remove obsolete sources

Currently supported sources:
- AR: مانجا اون لاين
- EN: Read Comics Online
- EN: Fallen Angels Scans
- EN: Mangawww Reader
- EN: MangaForLife
- EN: Manga Spoil
- EN: H-Manga.moe
- EN: DManga
- EN: Chibi Manga Reader
- EN: ZXComic
- ES: My-mangas.com
- FA: TrinityReader
- FR: Manga-LEL
- FR: Manga Etonnia
- ID: Manga Desu
- ID: MangaOnline
- ID: KOMIK.CO.ID
- ID: MangaID
- ID: Manga Seru
- JA: Mangaraw Online
- JA: Mangazuki RAWS
- PL: Candy Scans
- PT: Mangás Yuri
- RU: NAKAMA
- TR: MangAoi
- TR: MangaHanta
- OTHER: HentaiShark

* Remove offline sources

* Extend HttpSource instead of ParsedHttpSource

* Update sources

Currently supported sources:
- AR: مانجا اون لاين
- EN: Read Comics Online
- EN: Fallen Angels Scans
- EN: MangaForLife
- EN: Manga Spoil
- EN: DManga
- EN: Chibi Manga Reader
- EN: ZXComic
- EN: DB Manga
- EN: Mangacox
- EN: GO Manhwa
- EN: Hentai2Manga
- ES: My-mangas.com
- ES: SOS Scanlation
- FA: TrinityReader
- FR: Manga-LEL
- FR: Scan FR
- ID: Manga Desu
- ID: Komikid
- ID: MangaID
- ID: Manga Seru
- JA: Mangaraw Online
- JA: Mangazuki RAWS
- JA: MangaRAW
- PL: Candy Scans
- PT: Mangás Yuri
- RU: NAKAMA
- RU: AkaiYuhiMun team
- TR: MangAoi
- TR: MangaHanta
- TR: ManhuaTR
- OTHER: HentaiShark

* Change extension name and remove dead sources

Currently supported sources:
- AR: مانجا اون لاين
- EN: Read Comics Online
- EN: Fallen Angels Scans
- EN: MangaForLife
- EN: Manga Spoil
- EN: DManga
- EN: Chibi Manga Reader
- EN: ZXComic
- EN: Mangacox
- EN: Hentai2Manga
- ES: My-mangas.com
- ES: SOS Scanlation
- FA: TrinityReader
- FR: Manga-LEL
- FR: Scan FR
- ID: Manga Desu
- ID: Komikid
- ID: MangaID
- ID: Manga Seru
- JA: Mangaraw Online
- JA: Mangazuki RAWS
- JA: MangaRAW
- PL: Candy Scans
- PT: Mangás Yuri
- RU: NAKAMA
- TR: MangAoi
- TR: MangaHanta
- TR: ManhuaTR
- OTHER: HentaiShark

* Add tag searching support (see the sketch after the source list below)
Remove dead sources
Re-enable previously dead sources that are back online
Add some new sources
Sources are now parsed from JSON (still hardcoded)

Currently supported sources:
- AR: مانجا اون لاين
- AR: Manga FYI
- EN: Read Comics Online
- EN: Fallen Angels Scans
- EN: Mangawww Reader
- EN: MangaForLife
- EN: Manga Spoil
- EN: DManga
- EN: Chibi Manga Reader
- EN: ZXComic
- EN: Mangacox
- EN: KoManga
- EN: Manganimecan
- EN: Hentai2Manga
- EN: White Cloud Pavilion
- EN: 4 Manga
- EN: XYXX.INFO
- ES: My-mangas.com
- ES: SOS Scanlation
- FR: Manga-LEL
- FR: Manga Etonnia
- FR: Scan FR
- FR: ScanFR.com
- FR: Manga FYI
- FR: Mugiwara
- FR: scans-manga
- ID: Manga Desu
- ID: MangaOnline
- ID: Komikid
- ID: MangaID
- ID: Manga Seru
- ID: Manga FYI
- JA: Mangazuki RAWS
- JA: MangaRAW
- PL: Candy Scans
- PL: ToraScans
- PT: Comic Space
- PT: Mangás Yuri
- RU: NAKAMA
- TR: MangAoi
- TR: MangaHanta
- TR: ManhuaTR
- VI: Fallen Angels Scans
- OTHER: HentaiShark
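
Tag searching reuses the existing filter mechanism: the per-source tag list scraped by the generator is exposed as an extra dropdown whose selected id is appended to the filterList URL as a tag query parameter, just like the category filter. A simplified sketch of the resulting request URL, without the android.net.Uri and Filter machinery the real class uses (parameter names are the ones the extension sends; URL encoding omitted):

    // Sketch: assemble a filterList URL from selected filter values.
    // Hypothetical helper for illustration, not part of the extension code.
    fun buildFilterUrl(baseUrl: String, page: Int, selected: Map<String, String>): String {
        val params = selected.entries.joinToString("&") { (key, value) -> "$key=$value" }
        return "$baseUrl/filterList?page=$page" + if (params.isEmpty()) "" else "&$params"
    }

    // buildFilterUrl("http://mangaid.co", 1, mapOf("tag" to "action", "sortBy" to "views"))
    //   -> "http://mangaid.co/filterList?page=1&tag=action&sortBy=views"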

* Update source categories and tags

* Add icon
Remove dead source
Authored by Andy Bao on 2018-03-17 16:54:00 -04:00, committed by inorichi
parent ab6054944d
commit 318f335bf8
11 changed files with 790 additions and 0 deletions


@@ -0,0 +1,18 @@
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
ext {
appName = 'Tachiyomi: My Manga Reader CMS (Many sources)'
pkgNameSuffix = 'all.mmrcms'
extClass = '.MyMangaReaderCMSSources'
extVersionCode = 1
extVersionSuffix = 1
libVersion = '1.2'
}
dependencies {
provided "com.google.code.gson:gson:2.8.1"
provided "com.github.salomonbrys.kotson:kotson:2.5.0"
}
apply from: "$rootDir/common.gradle"

src/all/mmrcms/genSources.sh (new executable file, 346 lines)

@@ -0,0 +1,346 @@
#!/usr/bin/env bash
echo "My Manga Reader CMS source generator by: nulldev"
# CMS: https://getcyberworks.com/product/manga-reader-cms/
# Print a message out to stderr
function echoErr() {
echo "ERROR: $@" >&2
}
# Require that a command exists before continuing
function require() {
command -v $1 >/dev/null 2>&1 || { echoErr "This script requires $1 but it's not installed."; exit 1; }
}
# Define commands that this script depends on
require xmllint
require jq
require perl
require wget
require curl
require grep
require sed
# Show help/usage info
function printHelp() {
echo "Usage: ./genSources.sh [options]"
echo ""
echo "Options:"
echo "--help: Show this help page"
echo "--dry-run: Perform a dry run (make no changes)"
echo "--list: List currently available sources"
echo "--out <file>: Explicitly specify output file"
}
# Target file
TARGET="src/eu/kanade/tachiyomi/extension/all/mmrcms/GeneratedSources.kt"
# String containing processed URLs (used to detect duplicate URLs)
PROCESSED=""
# Parse CLI args
while [ $# -gt 0 ]
do
case "$1" in
--help)
printHelp
exit 0
;;
--dry-run) OPT_DRY_RUN=true
;;
--list)
OPT_DRY_RUN=true
OPT_LIST=true
;;
--out)
TARGET="$2"
shift
;;
--*)
echo "Invalid option $1!"
printHelp
exit -1
;;
*)
echo "Invalid argument $1!"
printHelp
exit -1
;;
esac
shift
done
# Change target if performing dry run
if [ "$OPT_DRY_RUN" = true ] ; then
# Do not warn if dry running because of list
if ! [ "$OPT_LIST" = true ] ; then
echo "Performing a dry run, no changes will be made!"
fi
TARGET="/dev/null"
else
# Delete old sources
rm "$TARGET"
fi
# Variable used to store output while processing
QUEUED_SOURCES="["
# lang, name, baseUrl
function gen() {
PROCESSED="$PROCESSED$3\n"
if [ "$OPT_LIST" = true ] ; then
echo "- $(echo "$1" | awk '{print toupper($0)}'): $2"
else
echo "Generating source: $2"
QUEUED_SOURCES="$QUEUED_SOURCES"$'\n'"$(genSource "$1" "$2" "$3")"
# genSource runs in a subprocess, so we check for bad exit code and exit current process if necessary
[ $? -ne 0 ] && exit -1;
fi
}
# Find and get the item URL from an HTML page
function getItemUrl() {
grep -oP "(?<=showURL = \")(.*)(?=SELECTION)" "$1"
}
# Strip all scripts and Cloudflare email protection from page
# We strip Cloudflare email protection as titles like 'IDOLM@STER' can trigger it and break the parser
function stripScripts() {
perl -0pe 's/<script.*?>[\s\S]*?< *?\/ *?script *?>//g' |\
perl -0pe 's/<span class="__cf_email__".*?>[\s\S]*?< *?\/ *?span *?>/???@???/g'
}
# Verify that a response is valid
function verifyResponse() {
[ "${1##*$'\n'}" -eq "200" ] && [[ "$1" != *"Whoops, looks like something went wrong"* ]]
}
# Get the available tags from the manga list page
function parseTagsFromMangaList() {
xmllint --xpath "//div[contains(@class, 'tag-links')]//a" --html "$1" 2>/dev/null |\
sed 's/<\/a>/"},\n/g; s/">/", "name": "/g;' |\
perl -pe 's/<a.*?\/tag\// {"id": "/gi;' |\
sed '/^</d'
}
# Get the available categories from the manga list page
function parseCategoriesFromMangaList() {
xmllint --xpath "//li//a[contains(@class, 'category')]" --html "$1" 2>/dev/null |\
sed 's/<\/a>/"},\n/g; s/" class="category">/", "name": "/g;' |\
perl -pe 's/<a.*?\?cat=/ {"id": "/gi;'
}
# Get the available categories from the advanced search page
function parseCategoriesFromAdvancedSearch() {
xmllint --xpath "//select[@name='categories[]']/option" --html "$1" 2>/dev/null |\
sed 's/<\/option>/"},\n/g; s/<option value="/ {"id": "/g; s/">/", "name": "/g;'
}
# Unescape HTML entities
function unescapeHtml() {
echo "$1" | perl -C -MHTML::Entities -pe 'decode_entities($_);'
}
# Remove the last character from a string, often used to remove the trailing comma
function stripLastComma() {
echo "${1::-1}"
}
# lang, name, baseUrl
function genSource() {
# Allocate temp files
DL_TMP="$(mktemp)"
PG_TMP="$(mktemp)"
# Fetch categories from advanced search
wget "$3/advanced-search" -O "$DL_TMP"
# Find manga/comic URL
ITEM_URL="$(getItemUrl "$DL_TMP")"
# Remove scripts
cat "$DL_TMP" | stripScripts > "$PG_TMP"
# Find and transform categories
CATEGORIES="$(parseCategoriesFromAdvancedSearch "$PG_TMP")"
# Get item url from home page if not on advanced search page!
if [[ -z "${ITEM_URL// }" ]]; then
# Download home page
wget "$3" -O "$DL_TMP"
# Extract item url again
ITEM_URL="$(getItemUrl "$DL_TMP")"
# Still missing?
if [[ -z "${ITEM_URL// }" ]]; then
echoErr "Could not get item URL!"
exit -1
fi
fi
# Calculate location of manga list page
LIST_URL_PREFIX="manga"
# Get last path item in item URL and set as URL prefix
if [[ $ITEM_URL =~ .*\/([^\\]+)\/ ]]; then
LIST_URL_PREFIX="${BASH_REMATCH[1]}"
fi
# Download manga list page
wget "$3/$LIST_URL_PREFIX-list" -O "$DL_TMP"
# Remove scripts
cat "$DL_TMP" | stripScripts > "$PG_TMP"
# Get categories from manga list page if we couldn't from advanced search
if [[ -z "${CATEGORIES// }" ]]; then
# Parse
CATEGORIES="$(parseCategoriesFromMangaList "$PG_TMP")"
# Check again
if [[ -z "${CATEGORIES// }" ]]; then
echoErr "Could not get categories!"
exit -1
fi
fi
# Get tags from manga list page
TAGS="$(parseTagsFromMangaList "$PG_TMP")"
if [[ -z "${TAGS// }" ]]; then
TAGS="null"
else
TAGS="$(stripLastComma "$TAGS")"
TAGS=$'[\n'"$TAGS"$'\n ]'
fi
# Unescape HTML entities
CATEGORIES="$(unescapeHtml "$CATEGORIES")"
# Check if latest manga is supported
LATEST_RESP=$(curl --write-out \\n%{http_code} --silent --output - "$3/filterList?page=1&sortBy=last_release&asc=false")
SUPPORTS_LATEST="false"
if verifyResponse "$LATEST_RESP"; then
SUPPORTS_LATEST="true"
fi
# Remove leftover html pages
rm "$DL_TMP"
rm "$PG_TMP"
# Cleanup categories
CATEGORIES="$(stripLastComma "$CATEGORIES")"
echo " {"
echo " \"language\": \"$1\","
echo " \"name\": \"$2\","
echo " \"base_url\": \"$3\","
echo " \"supports_latest\": $SUPPORTS_LATEST,"
echo " \"item_url\": \"$ITEM_URL\","
echo " \"categories\": ["
echo "$CATEGORIES"
echo " ],"
echo " \"tags\": $TAGS"
echo " },"
}
# Source list
gen "ar" "مانجا اون لاين" "http://www.on-manga.com"
gen "ar" "Manga FYI" "http://mangafyi.com/manga/arabic"
gen "en" "Read Comics Online" "http://readcomics.website"
gen "en" "Fallen Angels Scans" "http://manga.fascans.com"
# Went offline
# gen "en" "MangaRoot" "http://mangaroot.com"
gen "en" "Mangawww Reader" "http://mangawww.com"
gen "en" "MangaForLife" "http://manga4ever.com"
gen "en" "Manga Spoil" "http://mangaspoil.com"
# Protected by CloudFlare
# gen "en" "MangaBlue" "http://mangablue.com"
# Some sort of anti-bot system
# gen "en" "Manga Forest" "https://mangaforest.com"
gen "en" "DManga" "http://dmanga.website"
gen "en" "Chibi Manga Reader" "http://www.cmreader.info"
gen "en" "ZXComic" "http://zxcomic.com"
# Went offline
# gen "en" "DB Manga" "http://dbmanga.com"
gen "en" "Mangacox" "http://mangacox.com"
# Protected by CloudFlare
# gen "en" "GO Manhwa" "http://gomanhwa.xyz"
# Went offline
# gen "en" "KoManga" "https://komanga.net"
gen "en" "Manganimecan" "http://manganimecan.com"
gen "en" "Hentai2Manga" "http://hentai2manga.com"
gen "en" "White Cloud Pavilion" "http://www.whitecloudpavilion.com/manga/free"
gen "en" "4 Manga" "http://4-manga.com"
gen "en" "XYXX.INFO" "http://xyxx.info"
gen "es" "My-mangas.com" "https://my-mangas.com"
gen "es" "SOS Scanlation" "https://sosscanlation.com"
# Went offline
# gen "fa" "TrinityReader" "http://trinityreader.pw"
gen "fr" "Manga-LEL" "https://www.manga-lel.com"
gen "fr" "Manga Etonnia" "https://www.etonnia.com"
gen "fr" "Scan FR" "http://www.scan-fr.net"
gen "fr" "ScanFR.com" "http://scanfr.com"
gen "fr" "Manga FYI" "http://mangafyi.com/manga/french"
gen "fr" "Mugiwara" "http://mugiwara.be"
gen "fr" "scans-manga" "http://scans-manga.com"
# Went offline
# gen "fr" "Tous Vos Scans" "http://www.tous-vos-scans.com"
gen "id" "Manga Desu" "http://mangadesu.net"
# Went offline
# gen "id" "Komik Mangafire.ID" "http://go.mangafire.id"
gen "id" "MangaOnline" "http://mangaonline.web.id"
# Went offline
# gen "id" "MangaNesia" "https://manganesia.com"
gen "id" "Komikid" "http://www.komikid.com"
gen "id" "MangaID" "http://mangaid.co"
gen "id" "Manga Seru" "http://www.mangaseru.top"
gen "id" "Manga FYI" "http://mangafyi.com/manga/indonesian"
# Went offline
# gen "id" "Indo Manga Reader" "http://indomangareader.com"
# Some sort of anti-bot system
# gen "it" "Kingdom Italia Reader" "http://kireader.altervista.org"
# Went offline
# gen "ja" "IchigoBook" "http://ichigobook.com"
# Went offline
# gen "ja" "Mangaraw Online" "http://mangaraw.online"
gen "ja" "Mangazuki RAWS" "https://raws.mangazuki.co"
gen "ja" "MangaRAW" "https://www.mgraw.com"
gen "pl" "Candy Scans" "http://csreader.webd.pl"
gen "pl" "ToraScans" "http://torascans.pl"
gen "pt" "Comic Space" "https://www.comicspace.com.br"
gen "pt" "Mangás Yuri" "https://mangasyuri.net"
gen "ru" "NAKAMA" "http://nakama.ru"
# Went offline
# gen "ru" "AkaiYuhiMun team" "https://akaiyuhimun.ru/reader"
gen "tr" "MangAoi" "http://mangaoi.com"
gen "tr" "MangaHanta" "http://mangahanta.com"
gen "tr" "ManhuaTR" "http://www.manhua-tr.com"
gen "vi" "Fallen Angels Scans" "http://truyen.fascans.com"
# Blocks bots (like this one)
# gen "tr" "Epikmanga" "http://www.epikmanga.com"
# NOTE: THIS SOURCE CONTAINS A CUSTOM LANGUAGE SYSTEM (which will be ignored)!
gen "other" "HentaiShark" "http://www.hentaishark.com"
if ! [ "$OPT_LIST" = true ] ; then
# Remove last comma from output
QUEUED_SOURCES="$(stripLastComma "$QUEUED_SOURCES")"
# Format, minify and split JSON output into chunks of 5000 chars
OUTPUT="$(echo -e "$QUEUED_SOURCES\n]" | jq -c . | fold -s -w5000)"
# Write file header
echo -e "package eu.kanade.tachiyomi.extension.all.mmrcms\n" >> "$TARGET"
echo -e "// GENERATED FILE, DO NOT MODIFY!" >> "$TARGET"
echo -e "// Generated on $(date)\n" >> "$TARGET"
# Convert split lines into variables
COUNTER=0
CONCAT="val SOURCES: String get() = "
TOTAL_LINES="$(echo "$OUTPUT" | wc -l)"
while read -r line; do
COUNTER=$[$COUNTER +1]
VARNAME="MMRSOURCE_$COUNTER"
echo "private val $VARNAME = \"\"\"$line\"\"\"" >> "$TARGET"
CONCAT="$CONCAT$VARNAME"
if [ "$COUNTER" -ne "$TOTAL_LINES" ]; then
CONCAT="$CONCAT + "
fi
done <<< "$OUTPUT"
echo "$CONCAT" >> "$TARGET"
fi
# Detect and warn about duplicate sources
DUPES="$(echo -e "$PROCESSED" | sort | uniq -d)"
if [[ ! -z "$DUPES" ]]; then
echo
echo "----> WARNING, DUPLICATE SOURCES DETECTED! <----"
echo "Listing duplicates:"
echo "$DUPES"
echo
fi
echo "Done!"

Six binary image files added (extension icons; see "Add icon" above): 2.0 KiB, 1.5 KiB, 3.0 KiB, 4.3 KiB, 6.7 KiB, and 19 KiB.

File diff suppressed because one or more lines are too long (likely the generated sources file, whose minified JSON lines exceed the viewer's length limit)


@@ -0,0 +1,286 @@
package eu.kanade.tachiyomi.extension.all.mmrcms
import android.net.Uri
import com.github.salomonbrys.kotson.array
import com.github.salomonbrys.kotson.get
import com.github.salomonbrys.kotson.string
import com.google.gson.JsonParser
import eu.kanade.tachiyomi.network.GET
import eu.kanade.tachiyomi.source.model.*
import eu.kanade.tachiyomi.source.online.HttpSource
import eu.kanade.tachiyomi.util.asJsoup
import okhttp3.Request
import okhttp3.Response
import org.jsoup.nodes.Element
import java.text.ParseException
import java.text.SimpleDateFormat
import java.util.*
class MyMangaReaderCMSSource(override val lang: String,
override val name: String,
override val baseUrl: String,
override val supportsLatest: Boolean,
private val itemUrl: String,
private val categoryMappings: List<Pair<String, String>>,
private val tagMappings: List<Pair<String, String>>?) : HttpSource() {
private val jsonParser = JsonParser()
private val itemUrlPath = Uri.parse(itemUrl).pathSegments.first()
override fun popularMangaRequest(page: Int) = GET("$baseUrl/filterList?page=$page&sortBy=views&asc=false")
override fun searchMangaRequest(page: Int, query: String, filters: FilterList): Request {
//Query overrides everything
val url: Uri.Builder
if(query.isNotBlank()) {
url = Uri.parse("$baseUrl/search")!!.buildUpon()
url.appendQueryParameter("query", query)
} else {
url = Uri.parse("$baseUrl/filterList?page=$page")!!.buildUpon()
filters.filterIsInstance<UriFilter>()
.forEach { it.addToUri(url) }
}
return GET(url.toString())
}
override fun latestUpdatesRequest(page: Int) = GET("$baseUrl/filterList?page=$page&sortBy=last_release&asc=false")
override fun popularMangaParse(response: Response) = internalMangaParse(response)
override fun searchMangaParse(response: Response): MangasPage {
return if(response.request().url().queryParameter("query")?.isNotBlank() == true) {
//If a search query was specified, use search instead!
MangasPage(jsonParser
.parse(response.body()!!.string())["suggestions"].array
.map {
SManga.create().apply {
val segment = it["data"].string
setUrlWithoutDomain(itemUrl + segment)
title = it["value"].string
// Guess thumbnails
thumbnail_url = "$baseUrl/uploads/manga/$segment/cover/cover_250x350.jpg"
}
}, false)
} else {
internalMangaParse(response)
}
}
override fun latestUpdatesParse(response: Response) = internalMangaParse(response)
private fun internalMangaParse(response: Response): MangasPage {
val document = response.asJsoup()
return MangasPage(document.getElementsByClass("col-sm-6").map {
SManga.create().apply {
val urlElement = it.getElementsByClass("chart-title")
setUrlWithoutDomain(urlElement.attr("href"))
title = urlElement.text().trim()
thumbnail_url = it.select(".media-left img").attr("src")
// Guess thumbnails on broken websites
if (thumbnail_url?.isBlank() != false || thumbnail_url?.endsWith("no-image.png") != false) {
thumbnail_url = "$baseUrl/uploads/manga/${url.substringAfterLast('/')}/cover/cover_250x350.jpg"
}
}
}, document.select(".pagination a[rel=next]").isNotEmpty())
}
override fun mangaDetailsParse(response: Response) = SManga.create().apply {
val document = response.asJsoup()
title = document.getElementsByClass("widget-title").text().trim()
thumbnail_url = document.select(".row .img-responsive").attr("src")
description = document.select(".row .well p").text().trim()
var cur: String? = null
for(element in document.select(".row .dl-horizontal").select("dt,dd")) {
when(element.tagName()) {
"dt" -> cur = element.text().trim().toLowerCase()
"dd" -> when(cur) {
"author(s)",
"autor(es)",
"auteur(s)",
"著作",
"yazar(lar)",
"mangaka(lar)",
"pengarang/penulis",
"pengarang",
"penulis",
"autor",
"المؤلف",
"перевод" -> author = element.text()
"artist(s)",
"artiste(s)",
"sanatçi(lar)",
"artista(s)",
"artist(s)/ilustrator",
"الرسام",
"seniman" -> artist = element.text()
"categories",
"categorías",
"catégories",
"ジャンル",
"kategoriler",
"categorias",
"kategorie",
"التصنيفات",
"жанр",
"kategori" -> genre = element.getElementsByTag("a").joinToString {
it.text().trim()
}
"status",
"statut",
"estado",
"状態",
"durum",
"الحالة",
"статус" -> status = when(element.text().trim().toLowerCase()) {
"complete",
"مكتملة",
"complet" -> SManga.COMPLETED
"ongoing",
"مستمرة",
"en cours" -> SManga.ONGOING
else -> SManga.UNKNOWN
}
}
}
}
}
/**
* Parses the response from the site and returns a list of chapters.
*
* Overridden to allow for null chapters
*
* @param response the response from the site.
*/
override fun chapterListParse(response: Response): List<SChapter> {
val document = response.asJsoup()
return document.select(chapterListSelector()).mapNotNull { nullableChapterFromElement(it) }
}
/**
* Returns the Jsoup selector that returns a list of [Element] corresponding to each chapter.
*/
fun chapterListSelector() = ".chapters > li:not(.btn)"
/**
* Returns a chapter from the given element.
*
* @param element an element obtained from [chapterListSelector].
*/
private fun nullableChapterFromElement(element: Element): SChapter? {
val titleWrapper = element.getElementsByClass("chapter-title-rtl").first()
val url = titleWrapper.getElementsByTag("a").attr("href")
// Ensure chapter actually links to a manga
// Some websites use the chapters box to link to post announcements
if (!Uri.parse(url).pathSegments.firstOrNull().equals(itemUrlPath, true)) {
return null
}
val chapter = SChapter.create()
chapter.setUrlWithoutDomain(url)
chapter.name = titleWrapper.text()
// Parse date
val dateText = element.getElementsByClass("date-chapter-title-rtl").text().trim()
val formattedDate = try {
DATE_FORMAT.parse(dateText).time
} catch (e: ParseException) {
0L
}
chapter.date_upload = formattedDate
return chapter
}
override fun pageListParse(response: Response)
= response.asJsoup().select("#all > .img-responsive")
.mapIndexed { i, e ->
val url = e.attr("data-src").trim()
Page(i, url, url)
}
override fun imageUrlParse(response: Response)
= throw UnsupportedOperationException("Unused method called!")
private fun getInitialFilterList() = listOf<Filter<*>>(
Filter.Header("NOTE: Ignored if using text search!"),
Filter.Separator(),
AuthorFilter(),
UriSelectFilter("Category",
"cat",
arrayOf("" to "Any",
*categoryMappings.toTypedArray()
)
),
UriSelectFilter("Begins with",
"alpha",
arrayOf("" to "Any",
*"#ABCDEFGHIJKLMNOPQRSTUVWXYZ".toCharArray().map {
Pair(it.toString(), it.toString())
}.toTypedArray()
)
),
UriSelectFilter("Sort by",
"sortBy",
arrayOf(
"name" to "Name",
"views" to "Popularity",
"last_release" to "Last update"
), false),
UriSelectFilter("Sort direction",
"asc",
arrayOf(
"true" to "Ascending",
"false" to "Descending"
), false)
)
/**
* Returns the list of filters for the source.
*/
override fun getFilterList() = FilterList(
if(tagMappings != null)
(getInitialFilterList() + UriSelectFilter("Tag",
"tag",
arrayOf("" to "Any",
*tagMappings.toTypedArray()
)))
else getInitialFilterList()
)
/**
* Class that creates a select filter. Each entry in the dropdown has a name and a display name.
* If an entry is selected it is appended as a query parameter onto the end of the URI.
* If `firstIsUnspecified` is set to true and the first entry is selected, nothing will be appended to the URI.
*/
//vals: <name, display>
open class UriSelectFilter(displayName: String, val uriParam: String, val vals: Array<Pair<String, String>>,
val firstIsUnspecified: Boolean = true,
defaultValue: Int = 0) :
Filter.Select<String>(displayName, vals.map { it.second }.toTypedArray(), defaultValue), UriFilter {
override fun addToUri(uri: Uri.Builder) {
if (state != 0 || !firstIsUnspecified)
uri.appendQueryParameter(uriParam, vals[state].first)
}
}
class AuthorFilter: Filter.Text("Author"), UriFilter {
override fun addToUri(uri: Uri.Builder) {
uri.appendQueryParameter("author", state)
}
}
/**
* Represents a filter that is able to modify a URI.
*/
interface UriFilter {
fun addToUri(uri: Uri.Builder)
}
companion object {
private val DATE_FORMAT = SimpleDateFormat("d MMM. yyyy", Locale.US)
}
}


@@ -0,0 +1,93 @@
package eu.kanade.tachiyomi.extension.all.mmrcms
import com.github.salomonbrys.kotson.array
import com.github.salomonbrys.kotson.bool
import com.github.salomonbrys.kotson.nullArray
import com.github.salomonbrys.kotson.string
import com.google.gson.JsonArray
import com.google.gson.JsonObject
import com.google.gson.JsonParser
import eu.kanade.tachiyomi.source.SourceFactory
class MyMangaReaderCMSSources: SourceFactory {
/**
* Create a new copy of the sources
* @return The created sources
*/
override fun createSources() = parseSources(SOURCES)
/**
* Parse a JSON array of sources into a list of `MyMangaReaderCMSSource`s
*
* Example JSON array:
* ```
* [
* {
* "language": "en",
* "name": "Example manga reader",
* "base_url": "http://example.com",
* "supports_latest": true,
* "item_url": "http://example.com/manga/",
* "categories": [
* {"id": "stuff", "name": "Stuff"},
* {"id": "test", "name": "Test"}
* ],
* "tags": [
* {"id": "action", "name": "Action"},
* {"id": "adventure", "name": "Adventure"}
* ]
* }
* ]
* ```
*
* Sources that do not support tags may use `null` instead of a list of JSON objects
*
* @param sourceString The JSON array of sources to parse
* @return The list of parsed sources
*/
private fun parseSources(sourceString: String): List<MyMangaReaderCMSSource> {
val parser = JsonParser()
val array = parser.parse(sourceString).array
return array.map {
it as JsonObject
val language = it["language"].string
val name = it["name"].string
val baseUrl = it["base_url"].string
val supportsLatest = it["supports_latest"].bool
val itemUrl = it["item_url"].string
val categories = mapToPairs(it["categories"].array)
val tags = it["tags"].nullArray?.let { mapToPairs(it) }
MyMangaReaderCMSSource(
language,
name,
baseUrl,
supportsLatest,
itemUrl,
categories,
tags
)
}
}
/**
* Map an array of JSON objects to pairs. Each JSON object must have
* the following properties:
*
* id: first item in pair
* name: second item in pair
*
* @param array The array to process
* @return The new list of pairs
*/
private fun mapToPairs(array: JsonArray): List<Pair<String, String>>
= array.map {
it as JsonObject
it["id"].string to it["name"].string
}
}
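
A rough usage illustration (hypothetical, not part of the extension; in practice the app instantiates the factory through the extClass entry in build.gradle, and the sources need the Android runtime because MyMangaReaderCMSSource parses URIs with android.net.Uri):

    // Hypothetical sketch: list every generated source by language and name.
    fun listSources() {
        val sources = MyMangaReaderCMSSources().createSources()
        // For the example JSON in the doc comment above this would print "EN: Example manga reader";
        // mapToPairs() would have turned its categories into listOf("stuff" to "Stuff", "test" to "Test").
        sources.forEach { println("${it.lang.toUpperCase()}: ${it.name}") }
    }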