  1.    #21  
    Quote Originally Posted by jbusnengo View Post
    Hmmm... Internalz shows no such folder or file, nor can I see one when it's connected as a USB drive. Can you think of any particular reason why pReader wouldn't be able to create it? Thanks!
    I don't know, I never had it happen to me. Try creating it yourself in Internalz and see what happens.
  2. #22  
    Quote Originally Posted by Jappus View Post
    I don't know, I never had it happen to me. Try creating it yourself in Internalz and see what happens.
    I had no problem creating the folder, but pReader still just gives me a blank card when I launch it.
  3.    #23  
    Quote Originally Posted by ncinerate View Post
    Loaded a few smaller epub files and they work perfectly (and import very very quickly - awesome work).

    Unfortunately, I tried to import a larger epub (38.85 megabytes) and it didn't work. Specifically, it says "importing 1 file(s)" and the screen goes gray, then it comes back to the library, but the book isn't imported and doesn't appear in the library. If I attempt to add it again, the phone crashes and reboots.
    Actually, that sounds like the back-end simply crashed while importing the file. That immediately aborts the import process, leaves an empty folder (just like you've described) and then crashes the whole phone as soon as the app tries to access the crashed back-end again (for example, when you close the app or try to add a new file).

    Anyway, the most likely reason for the crash is that the ePub file contains a "table" or "ol" (numbered list) tag, which obviously has nothing to do with the size of the file. I fixed that bug yesterday, and as soon as I've fixed the pesky encoding bug, I'll release a v0.9.1 that'll hopefully fix your problem.

    My current estimate is that I'll be done with the encoding bug tomorrow.
  4.    #24  
    Quote Originally Posted by jbusnengo View Post
    I had no problem creating the folder, but pReader still just gives me a blank card when I launch it.
    Mhhhm, what an increasingly strange bug.

    Could you please install the IPK-file that I've sent you via PM?

    It's a snapshot of my internal development version which has the back-end logging turned on. This should create a "pReaderNative.log" file upon program start, which tracks the actions of the back-end. Do note that it's still v0.9.0, so you'll have to uninstall the app before you can actually install this snapshot.

    If you could send me the content of that file, that'd be great.

    Thanks!
  5. #25  
    Quote Originally Posted by Jappus View Post
    Mhhhm, what an increasingly strange bug.

    Could you please install the IPK-file that I've sent you via PM?

    It's a snapshot of my internal development version which has the back-end logging turned on. This should create a "pReaderNative.log" file upon program start, which tracks the actions of the back-end. Do note that it's still v0.9.0, so you'll have to uninstall the app before you can actually install this snapshot.

    If you could send me the content of that file, that'd be great.

    Thanks!
    The application started up just fine once I installed your IPK. It's always frustrating when bugs go away on their own... you never know when they'll come back. Do you still want the log file?
  6. #26  
    Quote Originally Posted by Jappus View Post
    Actually, that sounds like the back-end simply crashed while importing the file....

    I fixed that bug yesterday, and as soon as I've fixed the pesky encoding bug, I'll release a v0.9.1 that'll hopefully fix your problem.

    My current estimate is that I'll be done with the encoding bug tomorrow.
    Awesome. As soon as I have the new version in hand, I'll let you know whether it sorted out the problem. Thanks again for the great program.
  7. #27  
    Quote Originally Posted by Jappus View Post
    Could you open a bug report on the Sourceforge tracker I linked to in the OP? It allows you to attach test files and lets me track the number of outstanding bugs.
    A test file would be really great.
    I opened bug #3177025, but I can't upload a file larger than 256 KB. The ebook is 1 MB, so I put a link in the Sourceforge entry that can be used to download the book.
  8.    #28  
    Good news, everyone!

    I've just released v0.9.1 of the native alpha. It fixes all of the outstanding bugs that were reported by you guys. In particular:

    • Fixed bug that caused the app to crash during import of ePUB files that contained certain tags.
    • Fixed character set encoding. Now, encodings other than CP-1251 and UTF-8 should work, too.
    • If you change the encoding, the page is immediately refreshed, so that you can see the changes.


    I've uploaded the new version to Sourceforge and PreCentral, so you can either download and manually install it, or simply wait a while till it appears on the PreCentral feed.


    Have fun, and as always, please report any bugs that you can find.
  9. #29  
    Well, I decided to be bold and try out this puppy, so I bulk-imported 200 books. Guess what? It worked! And it worked fast: 200 books took about as much time as 2 did on the old version. So Jappus' boast of 100x faster is no exaggeration; it flies.

    That said, I noticed something that might be an area for optimization. During the import, I noticed that it was updating the Library window. I don't know if that's necessary or even if it steals much time. Just wanted to mention it.

    On the matter of importing, is it still making a copy of the book somewhere and the original can be deleted?

    Thanks for the great work!
  10. #30  
    Confirmed! The new update fixed the problem: the 38-megabyte ebook imported flawlessly (and QUICKLY). Whatever you did, it worked.

    Amazing work, Jappus, and greatly appreciated!
  11.    #31  
    Quote Originally Posted by govotsos View Post
    That said, I noticed something that might be an area for optimization. During the import, I noticed that it was updating the Library window. I don't know if that's necessary or even if it steals much time. Just wanted to mention it.
    Well, for one, it shouldn't steal that much time, and doing it that way freed me from having to detect when the import process crashed. Eventually, I'll delay the library refresh until after all the files are imported (see the sketch below); it's really just a minor alteration in the front-end, after all.

    But in the meantime, I decided to spend what little free development time I have on other parts of the pReader. But thanks anyway for reminding me of this issue.
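
    Just to make the idea concrete, here's a rough TypeScript sketch of the deferred refresh (invented names, not the actual pReader front-end code):

    Code:
    // Simplified sketch (hypothetical names, not the real pReader code):
    // refresh the library view once per batch instead of once per file.
    interface Importer {
      importFile(path: string): Promise<void>;
    }

    async function importBatch(
      importer: Importer,
      paths: string[],
      refreshLibrary: () => void
    ): Promise<void> {
      for (const path of paths) {
        await importer.importFile(path);
        // Old behaviour: refreshLibrary() here, once per file.
      }
      // New behaviour: a single refresh after the whole batch.
      refreshLibrary();
    }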


    On the matter of importing, is it still making a copy of the book somewhere and the original can be deleted?
    Yup. I briefly thought about using the original files directly and just creating a suitable index for them to speed up access. And if I had 20 fellow programmers working on the pReader, I'd probably have done it that way.

    But for just one developer, such an approach is infeasible, because each format would then need its own indexing scheme and its own tricks to speed up processing for quick page rendering.

    The way I do it now only requires each importer to dump an HTML file to disk and provide access to the images, links and predefined metadata. That's much easier, because I only need to optimize the reading performance of one storage format: the internal one.


    To make a long story short: yes, the pReader copies the book into its own HTML+JSON format, and you can delete the original file after import. And this time, thanks to the file-based API, if you ever drop a book from your library, it actually gets deleted, something that was always an issue with the WebOS SQLite interface, which simply refused to properly clean up its data tables on a delete.
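
    To give you an idea of what "dump an HTML file and allow access to images, links and metadata" means in practice, here's a hypothetical sketch of the importer contract (invented names, not the real pReader interface):

    Code:
    // Hypothetical sketch of the importer contract described above;
    // the actual pReader interface may look quite different.
    interface BookMetadata {
      title: string;
      author: string;
      language?: string;
    }

    interface ImportResult {
      htmlPath: string;               // the single HTML file dumped to disk
      images: string[];               // paths to the extracted images
      links: Record<string, string>;  // internal link targets
      metadata: BookMetadata;         // stored as JSON next to the HTML
    }

    // Each format (ePub, plain text, ...) only has to implement this:
    interface Importer {
      import(sourcePath: string, targetDir: string): Promise<ImportResult>;
    }

    Optimizing page rendering then only ever has to care about that one HTML+JSON layout, no matter which format the book originally came in.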
  12. #32  
    Quote Originally Posted by Jappus View Post
    Good news, everyone!

    I've just released v0.9.1 of the native alpha. It fixes all of the outstanding bugs that were reported by you guys. In particular:

    • Fixed bug that caused the app to crash during import of ePUB files that contained certain tags.
    • Fixed character set encoding. Now, encodings other than CP-1251 and UTF-8 should work, too.
    • If you change the encoding, the page is immediately refreshed, so that you can see the changes.


    I've uploaded the new version to Sourceforge and PreCentral, so you can either download and manually install it, or simply wait a while till it appears on the PreCentral feed.


    Have fun, and as always, please report any bugs that you can find.


    Nope, CP-1250 still doesn't show 100%. It's much better than before, but still not perfect.

    It doesn't display ř, č, ž, Ř, Ž and so on...
  13.    #33  
    Quote Originally Posted by Walhalla2k View Post
    Nope, CP-1250 still doesn't show 100%. It's much better than before, but still not perfect.

    It doesn't display ř, č, ž, Ř, Ž and so on...
    You need to re-import the book for it to work, since I had to change the way the HTML files are stored so that the high-order bytes are preserved. I've attached a screenshot of my Pre displaying the characters you mentioned just fine.
    Attached Images
  14. #34  
    Quote Originally Posted by Jappus View Post
    You need to re-import the book for it to work, since I had to change the way the HTML files are stored so that the high-order bytes are preserved. I've attached a screenshot of my Pre displaying the characters you mentioned just fine.
    Hey! How is that possible?... OK, of course I did re-import the books; that was the first thing I tried. That changed it from really wrong to better... Let me uninstall it, reinstall it, reboot and re-import :-)
  15. #35  
    EDIT: Ok, I found it...

    I was not aware that there is also a "Change Encoding" entry in the drop-down menu directly from the book.

    I had only changed it in the preferences menu. There it was set to CP-1250, but in "Change Encoding" it was set to CP-1252... Now both are on CP-1250 and it's working like a charm!!
  16.    #36  
    Quote Originally Posted by Walhalla2k View Post
    I was not aware that there is also a "Change Encoding" entry in the drop-down menu directly from the book.

    I had only changed it in the preferences menu. There it was set to CP-1250, but in "Change Encoding" it was set to CP-1252... Now both are on CP-1250 and it's working like a charm!!
    Yeah, the global setting currently does absolutely zilch. In general, I'm rather unsatisfied with this separation of the global and local encoding. It was always an issue with people finding one setting but not the other, or not finding either one.

    The only problem is: I never had a sufficiently bright idea for how to make this more... obvious. I know there has to be a convenient way for users to change both the default encoding for new books and the local encoding for already imported books. It's just that people keep confusing one for the other, or think that the global option immediately affects all books.
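
    Conceptually, what the two settings are supposed to do is trivial, something like this sketch (invented names, not the actual settings code):

    Code:
    // Sketch of the intended precedence (invented names, not real pReader code):
    // the per-book encoding, once set, wins; the global default only
    // determines which encoding NEW imports start out with.
    interface Preferences {
      defaultEncoding: string;               // seeds newly imported books
      perBookEncoding: Map<string, string>;  // set via the in-book menu
    }

    function encodingFor(bookId: string, prefs: Preferences): string {
      return prefs.perBookEncoding.get(bookId) ?? prefs.defaultEncoding;
    }

    The hard part isn't the code, it's making that split obvious in the UI.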


    And people wonder why Microsoft, Apple, Google et al invest millions and billions in GUI-optimization and extensive user-studies...
  17. #37  
    Why has pre|central been sending me 3 duplicate notifications of new posts today? It's only happening on this thread, and yes, I checked that the subscription settings are fine.
  18. #38  
    Quote Originally Posted by Jappus View Post
    Yeah, the global setting currently does absolutely zilch. In general, I'm rather unsatisfied with this separation of the global and local encoding. It was always an issue with people finding one setting but not the other, or not finding either one.

    The only problem is: I never had a sufficiently bright idea for how to make this more... obvious. I know there has to be a convenient way for users to change both the default encoding for new books and the local encoding for already imported books. It's just that people keep confusing one for the other, or think that the global option immediately affects all books.
    I have a couple of suggestions. First, in the preferences, change the text to read "Default encoding for new books" or something like that.

    In the long run, though, I wonder if it wouldn't be a better idea to attempt to detect the code page upon import. I know there are open-source projects out there that do this (e.g. cpdetector). You could also look at the source of Firefox. Detection might ultimately be the most user-friendly option.
  19.    #39  
    Quote Originally Posted by jbusnengo View Post
    In the long run, though, I wonder if it wouldn't be a better idea to attempt to detect the code page upon import. I know there are open-source projects out there that do this (e.g. cpdetector). You could also look at the source of Firefox. Detection might ultimately be the most user-friendly option.
    Yeah, I looked at automatic detection, but basically it's a huge round of Russian roulette. You can tell the difference between UTF-8, UTF-16/UCS-2 and UTF-32/UCS-4... but that's basically it. Detection of all the 8-bit encodings sooner or later devolves into parsing the file for bigrams and trigrams and then picking the charset by fuzzy matching, fuzzy because you'll usually have to choose between several plausible encodings.

    And that doesn't even take into account things like GBK or Shift-JIS, where you can't use bi-/trigrams and simply have to wing it. Or cases where one and the same encoding is used for wildly different languages, like CP-1250.


    And as for Firefox (or any other browser), their approach is just as much of a hack. Basically, they first try the HTTP character-set declaration. If it isn't there, they start parsing the file as CP-1252, UTF-8 or UTF-16, depending on what they encounter at the start of the file. After that, they look for an XML/HTML codepage declaration.

    But if that's not there either, they simply default to the codepage set by the OS or by the user. This usually boils down to CP-1252 (Windows and old Linux), UTF-8 (modern Linux) or MacRoman (Mac OS, duh).



    Basically, you have the choice between guessing, detecting and just letting the user decide. With guessing, you won't spend much time, but you'll be wrong in more than 50% of the cases where the user does NOT use your default encoding. With detection, you'll spend a lot of computing time... and only improve your hit rate to maybe 60-70%.


    In the end, you have to let the user decide manually anyway. So while detection mitigates the problem of needing a proper user interface, it doesn't actually solve it.
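
    For the curious: the only part of detection that's actually reliable is sniffing the byte-order mark at the start of the file. Here's a rough sketch of that, plus where the guessing would slot in (invented helper names, not real pReader code):

    Code:
    // The only reliable signal: a byte-order mark (BOM) at the file start.
    function sniffBom(bytes: Uint8Array): string | null {
      if (bytes.length >= 4 && bytes[0] === 0x00 && bytes[1] === 0x00 &&
          bytes[2] === 0xfe && bytes[3] === 0xff) return "UTF-32BE";
      if (bytes.length >= 4 && bytes[0] === 0xff && bytes[1] === 0xfe &&
          bytes[2] === 0x00 && bytes[3] === 0x00) return "UTF-32LE";
      if (bytes.length >= 3 && bytes[0] === 0xef && bytes[1] === 0xbb &&
          bytes[2] === 0xbf) return "UTF-8";
      if (bytes.length >= 2 && bytes[0] === 0xfe && bytes[1] === 0xff) return "UTF-16BE";
      if (bytes.length >= 2 && bytes[0] === 0xff && bytes[1] === 0xfe) return "UTF-16LE";
      return null; // no BOM: from here on, it's guesswork
    }

    function detectEncoding(bytes: Uint8Array, userDefault: string): string {
      const bom = sniffBom(bytes);
      if (bom) return bom; // 1. BOM: the only certain answer.
      // 2. Strict UTF-8 validation would go here; a clean decode is a
      //    strong (but not certain) hint.
      // 3. Then n-gram statistics over candidate 8-bit codepages, i.e.
      //    exactly the fuzzy selection described above.
      return userDefault; // 4. Ultimately, the user must be able to override.
    }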
  20. #40  
    Well, some more fiddling with pReader, and I've noticed a few things.

    That big ebook imported fine, but reading it is proving challenging. Moving forward through the pages creates some really odd issues, including text being drawn over the top of other existing text. If I attempt to move forward in the book (using the percentage slider), it moves to the point I want, but still displays 0% read at the bottom, and the second I attempt to turn the page, I'm back at the beginning of the book. Finally, the little "bookmark" button shows all the chapters etc., but won't actually let me navigate to them.

    Anyway, just thought I'd drop this small report, hope it helps!