| author    | Zhiming Wang <zmwangx@gmail.com> | 2015-05-04 14:55:10 -0700 |
|-----------|----------------------------------|---------------------------|
| committer | Zhiming Wang <zmwangx@gmail.com> | 2015-05-04 14:55:10 -0700 |
| commit    | 301679861a2440a10c9eac746cec86459f445ef9 (patch) | |
| tree      | 5aad22ead01cf0da226623f603f33867896c0fea /source/blog | |
| parent    | d0a07c64afba47bbe8bfb56ba9893296a73fc7db (diff) | |
| download  | my_new_personal_website-301679861a2440a10c9eac746cec86459f445ef9.tar.xz my_new_personal_website-301679861a2440a10c9eac746cec86459f445ef9.zip | |
remove all Octopress stuff
Diffstat (limited to 'source/blog')
49 files changed, 1399 insertions, 18 deletions
diff --git a/source/blog/2014-10-20-hello-octopress.md b/source/blog/2014-10-20-hello-octopress.md new file mode 100644 index 00000000..d359f401 --- /dev/null +++ b/source/blog/2014-10-20-hello-octopress.md @@ -0,0 +1,35 @@

---
layout: post
title: "Hello, Octopress!"
date: 2014-10-20 16:53:00 -0700
comments: true
categories:
---

This post marks my transition from Tumblr to Octopress & GitHub Pages.

I've been microblogging for a while at [zshello3.tumblr.com](http://zshello3.tumblr.com) and I liked it. Not because of readers, of which I suspect I have none; but there is certainly a huge amount of information I want to dump off my mind. Back in the day I *loved* my handwritten journals, and I really poured a lot into them. Peers were usually amazed when they saw my journals. These days typing seems to be a more robust solution for keeping my thoughts, especially when the thoughts are mostly technical.

Tumblr is awesome. Compared to WordPress (.com for pedantic people), it is both lightweight and beautiful (you get all the customization for free), so you quickly get to the writing. However, it is not designed for geeks, so

* Customization is capped at some point;
* I had the impression that the Markdown parser is horrendous;
* Email publishing is a nightmare, by the way;
* Code rendering always falls short — unless I spend a lot of time customizing (I'm pretty bad at HTML, CSS, and JS stuff).

Speaking of the last point, I've always been envious of the beautiful code blocks found on Octopress blogs. So here I come!

(Let me give it a try first.)

``` C hello.c
#include <stdio.h>

int main(int argc, char **argv) {
    printf("Hello, Octopress!\n");
    return 0;
}
```

Gorgeous. I'll get to the theme customization later. I'm actually busy as crazy this week.

Before I close this post, let me also try to embed a random gist I authored yesterday (for brewing):

{% gist 828fd00bdecd6611cf40 brew.sh %}

diff --git a/source/blog/2014-10-20-help-mou-hit-1-dot-0.md b/source/blog/2014-10-20-help-mou-hit-1-dot-0.md new file mode 100644 index 00000000..22587df4 --- /dev/null +++ b/source/blog/2014-10-20-help-mou-hit-1-dot-0.md @@ -0,0 +1,13 @@

---
layout: post
title: "Help Mou hit 1.0"
date: 2014-10-20 17:37:45 -0700
comments: true
categories:
---

Quick call for the [Mou 1.0 fundraiser](https://www.indiegogo.com/projects/mou-1-0-markdown-editor-on-os-x-for-you) on Indiegogo. At the time of writing, it has raised $6,178 of the $20,000 goal, and has 39 days to go (with 21 already passed).

I'm actually writing this post in Mou right now. It's far less powerful than Emacs, but when I want preview-on-the-fly, Mou is the go-to Markdown editor. Right now it's far from perfect; for instance, GFM fenced code blocks (now included in [CommonMark](http://commonmark.org)) are not supported, so you get a nonsense preview when your code block is fenced rather than indented. (Of course, Mou is even less suitable for editing an Octopress post due to the YAML metadata up front, but that's not a big deal.)

Let's hope Mou hits the 1.0 mark.
\ No newline at end of file

diff --git a/source/blog/2014-10-21-get-rolling.md b/source/blog/2014-10-21-get-rolling.md new file mode 100644 index 00000000..bf8f040e --- /dev/null +++ b/source/blog/2014-10-21-get-rolling.md @@ -0,0 +1,35 @@

---
layout: post
title: "Get rolling"
date: 2014-10-21 11:40:14 -0700
comments: true
categories:
---

Yesterday, on an internet forum, I saw someone's signature, which translates to

> Don't even get started if you know from the beginning that you won't make it to the very end.

This seems justified — persistence is the key to success; why even bother if you know you'll fail? However, I have to profoundly disagree with this, and even dedicate a blog post to discussing this problem.

The problem here is, the real world is much more complicated than the idealized world found in various quotes about persistence. Most of the time you have absolutely no clue which road leads where, unless you embark on the journey, clueless. As for me, fortunately I knew I was gonna be a mathematician/physicist ever since elementary school, so the roadmap is sorta clear on the grand scale. Still, all those tiny building blocks for the ultimate goal are confusing, and even more so for things hardly related to the ultimate goal, for instance, coding, which is just a hobby and a way to simplify life — it has simplified my life in some sense, but has itself complicated my life in other ways (your life inevitably becomes more complicated when you know more and want to find out even more).

And sometimes you are bound to fail — I know my code is shitty, for instance, but if I don't get rolling, I won't even have a working (albeit shitty) version that barely meets my needs. As another example, I was looking for a way to share and archive some photos. I started with [WordPress](http://apinkarchive.wordpress.com/), which turned out to be more formal and tiring than I'd thought. Then I ran a [Tumblr microblog](http://chorongmemories.tumblr.com/), which was great, but had certain limitations that in the end prevented it from being damn useful for myself, so in the end I went on "indefinite hiatus." In both cases I never declared that the thing was permanent — I was careful to say that the blogs were experimental, and I moved on when I found something better, or when they were no longer helping me. Yesterday I started yet another experiment, a [Tistory blog](http://apinkpcr.tistory.com). It is a random move triggered by something unexpected; the South Korean blogging platform is not that great, but at least it provides acceptable API access, and more importantly, I've got complete infrastructure built around the API to scrape photos, so it's easy to build on top of that to automate things. It is also experimental. Again, I'm not sure how long I can keep it up, but at least I'm happy with it for the time being. Being happy for the time being is the most important goal for hobbies.

Choices are hard. Especially in today's world, tools in every discipline are constantly improving, be it math, physics, programming, photography, blogging, or whatever. If you research over and over until you find "the perfect tool" or "the perfect platform" (hint: the ranking is constantly changing), you're stuck on the first step to anywhere. In fact, reading other people's blogs, for example, is not enough to learn what is best — you need to at least have some working knowledge to even decide which tool or platform is more suitable for you.
Therefore, the most sensible thing to do is to do a little bit of research (combined with your gut feeling), pick up something that makes you feel good at the moment, and immediately get rolling. Well, of course you need some research, otherwise you're just kidding yourself; doing `>>> import this` in Python tells you:

> Now is better than never.<br>
> Although never is often better than *right* now.

Then, when you have more experience and know what's wrong with your original choice, correct it or trash it. There's nothing wrong with abandoning dated crap, hopping bandwagons, or whatever; that's the nature of change and improvement, and the most sensible thing for someone as busy as you are.

Up till now I've been talking about tools and platforms, so maybe it seems that I'm attacking a straw man — you may argue that the original quote is not about what tools or platforms you use, but rather, what you try to accomplish with the tools or platforms. Okay, what about the grander things in life, like mathematical research, like *the final theory*? Well, similar. I'll quote Ravi here:

> …mathematics is so rich and infinite that it is impossible to learn it systematically, and if you wait to master one topic before moving on to the next, you'll never get anywhere. Instead, you'll have tendrils of knowledge extending far from your comfort zone. Then you can later backfill from these tendrils, and extend your comfort zone; this is much easier to do than learning "forwards".

Ravi is always hinting that "you should get started with actual research rather than 'prepare' yourself"! I can't vouch for that since I've yet to follow Ravi's advice, but the thinking here is crystal clear. **You shouldn't be afraid of failure; you shouldn't be afraid of being "not prepared enough"; you shouldn't be afraid of getting started.** Speaking of a final theory, I'm pretty sure I'm bound to fail, I'm pretty sure I won't see a satisfactory one in my lifetime — the inconvenient truth is that I have the gut feeling that the ultimate explanation is unfortunately intertwined with consciousness, and we are still far from having the right tools to understand consciousness. According to Feynman, **really knowing something is hard**. It's hard, so failure is not shameful at all. Those who won't even get started due to fear of failing or making the wrong choice won't fail again, since they've already failed at the very beginning. So, get rolling.

Yesterday I read [Fire and Motion](http://www.joelonsoftware.com/articles/fog0000000339.html) on *Joel on Software*. Joel's metaphor is really nice, but he's essentially conveying very similar ideas.

-------------------------------------------------------------------------------

By the way, I wrote this post in Emacs. I don't know why, but I seem to type much faster in Emacs than in Mou. (For Markdown editing I use `markdown-mode`, plus `typo-mode`, a minor mode I found today that seamlessly inserts smart quotes and smart dashes into Markdown articles.)
diff --git a/source/blog/2014-10-23-ripping-copy-protected-dvd-with-mpv.md b/source/blog/2014-10-23-ripping-copy-protected-dvd-with-mpv.md new file mode 100644 index 00000000..4db51a86 --- /dev/null +++ b/source/blog/2014-10-23-ripping-copy-protected-dvd-with-mpv.md @@ -0,0 +1,38 @@

---
layout: post
title: "Ripping copy-protected DVD with mpv"
date: 2014-10-23 20:03:22 -0700
comments: true
categories:
---

**_11/02/2014 update:_**

See [this post](/blog/2014/11/02/vobcopy-dvdbackup-etc/) for issues, explanations, and more.

---

**_10/25/2014 update:_**

I'm such an idiot. `vobcopy` is the real, hassle-free way to go.

    brew install vobcopy

Then, with the DVD mounted,

> **vobcopy** without any options will copy the title with the most chapters into files of 2GB size into the current working directory.

Of course there are a ton of options, but I generally hate to browse through options unless I have to, so I'm happy with calling it without arguments.

---

Yesterday I was trying to rip a music video off a newly released DVD from Japan. I knew very little about how DRM (in this case, CSS) actually works and how to break it. I tried to operate directly on the VOB file with `ffmpeg` or `mpv`, but both failed with a lot of header errors — I suppose more files than the VOB are required for authentication? Whatever, maybe I'll learn the details in the future, but I don't see the need since DVD is an outdated technology anyway.

So, can we proceed from here? Most certainly. I noticed that although `mpv` won't let me play a single VOB, I can simply hand it the DVD mount point, and it will play the whole DVD seamlessly. **Caution:** `mpv` needs to be compiled with `libdvdnav` and `libdvdread`! With brew you just do

    brew install mpv --with-libdvdnav --with-libdvdread

For better performance and backup, I first cloned the DVD into a `.cdr` image (DVD/CD-R Master Image) using Disk Utility (I've never tried creating/cloning images with the `diskutil` CLI, so nothing to report on that). Then I mounted the image, say at the mount point `/Volumes/UPBX_80165`. As said, I can hand that mount point to `mpv` and it simply works, but how about extracting the MPEG-2 video stream? The `--stream-capture=<filename>` option is there just for you. In principle `--stream-dump=<filename>` should also work, but without monitoring the output and controlling where to end, I'm not sure if it will ever terminate itself when reading from a DVD (when I stream-captured the DVD it just kept repeating itself until I explicitly quit with `q`). So that's it:

    mpv --stream-capture=dump.mpg /Volumes/UPBX_80165

Then you can torture the `dump.mpg` with `ffmpeg` however you want. The most obvious thing is to cut out the music video part and put it into a new container like MPEG-TS. Or transcode it to H.264 for your iPhone. The nice thing about `dump.mpg` is that, unless I got it wrong, there's no quality loss here — the only thing you got rid of is that goddamn DRM.

diff --git a/source/blog/2014-10-24-charles-munger-donated-$65m-to-kitp.md b/source/blog/2014-10-24-charles-munger-donated-$65m-to-kitp.md new file mode 100644 index 00000000..9c37b99e --- /dev/null +++ b/source/blog/2014-10-24-charles-munger-donated-$65m-to-kitp.md @@ -0,0 +1,23 @@

---
layout: post
title: "Charles Munger donated $65M to KITP"
date: 2014-10-24 16:41:36 -0700
comments: true
categories:
---

Today's news has it that Charles Munger made a $65 million donation to KITP at UCSB.
See for instance [this article](http://nyti.ms/1D4zg24) on NYT. Of course I didn't learn it from NYT (I'm generally sick of any news other than math, physics, or IT-related ones). I learned it from [Not Even Wrong](http://www.math.columbia.edu/~woit/wordpress/?p=7247) instead (of course I don't agree with Woit, but some of his links are nice).

I have no interest whatsoever in the business world, so I have no idea about Warren Buffett's business partners (although I'm still worldly enough to know Warren Buffett). However, the name Charles Munger sounded surprisingly familiar. After reading the sentence

> Mr. Munger has frequently donated big sums to schools like Stanford and the Harvard-Westlake School.

from the NYT article linked above, it finally hit me that Mr. Munger is the donor of the Munger Graduate Residence here at Stanford. Munger is really nice, much better than our undergrad residences AFAIK (location-wise Roble is still unbeatable for mathematicians and physicists, although Munger still kicks EV's ass).

I'm glad to see more and more entrepreneurs funding physics, especially theoretical physics, whether they understand it or not. (**Aside:** Even for laypeople, theoretical physics is cool, isn't it — like the coolest kid in class. I won't comment on whether math is cooler, but breakthrough mathematical work certainly goes largely unnoticed by the public, since theoretical physics is the last thing that laypeople can "vaguely understand". Do some name searches on Google to see how math and physics play out in the limelight — hint:

* Gauss — 4,130,000 results;
* Euler — 855,000 results;
* Newton — 13,200,000 results;
* Einstein — 6,330,000 results.

End of aside.) Engaging in physics is plain better than engaging in some questionable philanthropy. (Have you heard that the Gates Foundation invested in G4S, the largest private military and security company in the world? I'm not sure about the details — I once read that off a bathroom flyer — but that's definitely interesting philanthropy.)

diff --git a/source/blog/2014-10-25-os-x-package-receipts.md b/source/blog/2014-10-25-os-x-package-receipts.md new file mode 100644 index 00000000..a3695600 --- /dev/null +++ b/source/blog/2014-10-25-os-x-package-receipts.md @@ -0,0 +1,20 @@

---
layout: post
title: "OS X package receipts"
date: 2014-10-25 13:26:02 -0700
comments: true
categories:
---

I just learned something new. Whenever you install a `pkg` on OS X, OS X stores a receipt of what was installed in `/var/db/receipts` (I'm running OS X 10.9.5 at the time of writing), called a **bom** — bill of materials (I'd rather call it a manifest, whatever). This feature was introduced in NeXTSTEP. From `man 5 bom`:

> The Mac OS X Installer uses a file system "bill of materials" to determine which files to install, remove, or upgrade. A bill of materials, **bom**, contains all the files within a directory, along with some information about each file. File information includes: the file's UNIX permissions, its owner and group, its size, its time of last modification, and so on. Also included are a checksum of each file and information about hard links.

`man 5 bom` is actually badly maintained, as it says "The bill of materials for installed packages are found within the package receipts located in /Library/Receipts," whereas those were migrated to `/var/db/receipts` a long time ago.

`.bom` files are binary, but you can access the contents via `lsbom`.
For instance, to list the files installed,

    lsbom -f /var/db/receipts/org.macports.MacPorts.bom

Note that the paths printed are always relative to `/`. See `man 1 lsbom` for a detailed option listing.

(Beware when you try to clean up unwanted packages using the `lsbom` listing. Packages might overwrite files, so make sure you review the listing first and know what you are doing. "Knowing what you are doing" is the prerequisite for using `sudo` anyway.)

diff --git a/source/blog/2014-10-26-audio-cd-slash-dvd-to-iso-image-on-os-x.md b/source/blog/2014-10-26-audio-cd-slash-dvd-to-iso-image-on-os-x.md new file mode 100644 index 00000000..7ff57bc8 --- /dev/null +++ b/source/blog/2014-10-26-audio-cd-slash-dvd-to-iso-image-on-os-x.md @@ -0,0 +1,30 @@

---
layout: post
title: "Convert Audio CD/DVD to ISO image on OS X"
date: 2014-10-26 23:29:47 -0700
comments: true
categories:
---

**_11/02/2014 update:_**

See [this post](/blog/2014/11/02/vobcopy-dvdbackup-etc/) for issues, explanations, and more.

---

Today it occurred to me that I should make clones of my audio CDs (as stand-alone ISO images, I mean, not just rsyncing the AIFFs to subdirectories in `~/aud/lossless`). One can never have too many backups.

Of course I could simply pack the aforementioned directories with AIFFs into ISOs — that's not impressive. The end result might actually be the same, but I want to make the clones directly from the original CDs. It turns out that this is not so simple with the Disk Utility GUI — unlike DVDs, the "New Image" option is grayed out for Audio CDs. I'm not sure why, but maybe they want you to just use iTunes to deal with Audio CDs (which works well for all practical purposes — but theoretical curiosity never ends).

So there comes `hdiutil`. `hdiutil` and `diskutil` are the utilities underlying Disk Utility. Unfortunately, so far I know little about them except for the simplest things like `diskutil list`, `diskutil mount`, `hdiutil attach -stdinpass`, etc. (I'm so ignorant about anything filesystem-related!) The `hdiutil` verb that makes cross-platform CD or DVD images is `makehybrid`, which supports the following filesystem options: `-hfs` (holy crap, no HFS+ please! Apple ought to replace this thirty-year-old filesystem — ZFS or something better please!), `-iso`, `-joliet`, and `-udf`. For Audio CDs you use `-iso` with the `-joliet` extension:

    hdiutil makehybrid -iso -joliet -o AUDIO_CD_NAME.iso SOURCE

where `SOURCE` can be the mount point, the disk device file, etc. Similarly, although you can create `.cdr` images from DVDs via the Disk Utility GUI, you can also do it with `hdiutil` (which is potentially more portable — I've never heard a definitive answer on whether renaming `.cdr` to `.iso` is really cross-platform):

    hdiutil makehybrid -udf -o DVD_NAME.iso SOURCE

This way CSS keys *seem* to be cloned as well, since I was able to authenticate such a CSS-protected DVD with `libdvdread`.

---

P.S. I sincerely hope that one day lossless music tracks are no longer distributed through CD-ROMs. So painful — even my Internet speed is more than ten times faster than the [highest transfer rate](https://en.wikipedia.org/wiki/CD-ROM#Transfer_rates) available from any CD-ROM. (I've heard about some websites distributing lossless music digitally, but that won't happen to the music I care about in the near future.) I still like physical albums though — a real sense of possession. Maybe they should contain the physical goodies and some sort of access codes?
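(Addendum: to save retyping the two `makehybrid` recipes above, here is a minimal wrapper sketch. The `clone-disc` name and the argument handling are my own invention, not a standard tool; the `hdiutil` invocations are exactly the ones from this post.)

```bash
#!/usr/bin/env bash
# clone-disc: wrap the two hdiutil recipes above.
# Usage: clone-disc cd|dvd NAME SOURCE
case "$1" in
    cd)  hdiutil makehybrid -iso -joliet -o "$2.iso" "$3" ;;  # Audio CD: ISO 9660 + Joliet
    dvd) hdiutil makehybrid -udf -o "$2.iso" "$3" ;;          # DVD: UDF
    *)   echo "usage: clone-disc cd|dvd NAME SOURCE" >&2; exit 1 ;;
esac
```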
diff --git a/source/blog/2014-10-26-disk-visualizer-daisydisk.md b/source/blog/2014-10-26-disk-visualizer-daisydisk.md new file mode 100644 index 00000000..2de4201e --- /dev/null +++ b/source/blog/2014-10-26-disk-visualizer-daisydisk.md @@ -0,0 +1,18 @@

---
layout: post
title: "Disk visualizer: DaisyDisk"
date: 2014-10-26 00:02:22 -0700
comments: true
categories:
---

DaisyDisk is a pretty famous name. I've heard a lot that DaisyDisk is beautiful, but as a "power user" I always feel ashamed about using a disk analyzer or visualizer (although no one really cares). I'm pretty comfortable with doing most filesystem operations right in the shell, and for other tasks too tedious for the shell (like renaming a bunch of files with no obvious pattern), Finder (equipped with TotalFinder) works just fine.

Today I was trying to clean up my drive a bit, as there were only 22 GB left. I knew where the main problem lay: a huge number of high-res videos lying in `~/vid/staging`, awaiting renaming and migration to my external drive. Anyway, it would be nice to have some visualization of a detailed breakdown of my disk usage, preferably on any level I want without multiple passes of `du`. The name DaisyDisk popped up from the cache in my brain, so I headed over to their website to download it.

The result is not disappointing at all. Look at this:

![DaisyDisk screen shot](http://i.imgur.com/vyIwSNQ.png)

Beautiful. Moreover, functional. It indeed gives me a detailed breakdown on any level, within any directory (given enough privilege). I can also collect items I don't want and let DaisyDisk clean up for me at once (not surprising for a disk analyzer); this feature isn't that useful for me since I know exactly where my queues of unorganized items are — `~/Downloads`, `~/aud/staging`, `~/img/staging`, and `~/vid/staging`.

By the way, DaisyDisk seems to be WinRAR-free. (Rest assured; I'm a good guy and I *will* purchase a license — these days whether to purchase the website or the MAS version is a headache, though.)

diff --git a/source/blog/2014-10-27-onedrive-goes-unlimited.md b/source/blog/2014-10-27-onedrive-goes-unlimited.md new file mode 100644 index 00000000..256b04e6 --- /dev/null +++ b/source/blog/2014-10-27-onedrive-goes-unlimited.md @@ -0,0 +1,23 @@

---
layout: post
title: "OneDrive goes unlimited"
date: 2014-10-27 09:44:51 -0700
comments: true
categories:
---

**10/28/2014 Update:**

Yesterday Microsoft pushed an update to OneDrive.app to MAS. After uninstalling, reinstalling, and wiping the 10 GB folder I'd like to sync (that's for photos; I upload videos via the web interface) on both client and server sides, it actually began to work. The speed was around 1 MB/s during my last sync of 10 GB worth of data. Not fast, but I will be fairly satisfied if it can keep up with that speed. Time will tell.

---

The OneDrive team just [announced on their blog](https://blog.onedrive.com/office-365-onedrive-unlimited-storage/) that

> Today, storage limits just became a thing of the past with Office 365. Moving forward, all Office 365 customers will get unlimited OneDrive storage at no additional cost.

Hell, I hate Microsoft, but **I have to say that this is big**. OneDrive might not be the first to offer unlimited cloud storage (not sure), but it is certainly the first one to roll it out on such a grand scale. Remember, Office 365 is just $99.99 a year.
OneDrive might not be the best cloud storage service, especially for OS X customers, but unlimited is unlimited — in comparison, I pay Google $9.99 a month for 1 TB.

Microsoft products are indeed pure crap on a Mac. Office for Mac 2011 is horrible (speed and bloat aside, some features are simply not implemented — try to import a UTF-8 CSV into Excel: all non-ASCII characters become underscores; Microsoft's response was that Unicode import was not implemented yet). OneDrive.app is slow as a crawl, and it never finishes syncing — always stuck on the last few megabytes, eating up 100% CPU. Thank god I only use it for backups. The web interface is okay, although Google Drive is faster — blazing fast; I can easily upload at 20+ MB/s. But again, **unlimited is unlimited.**

Microsoft is opening a new chapter of cloud storage. In today's world, we have so many huge video files, so storage limits should indeed be a thing of the past. This is a smart move for Microsoft — when you have, say, 100 TB up there (which seems very far off at this moment, but what if all content goes 4K), and if competitors don't offer comparable plans, then you are stuck with Microsoft. And Office. Oh my god, Microsoft Office must die.

Of course I don't want to be stuck with Microsoft, so I'm looking at how Google and Apple will handle this. Google ought to offer more affordable plans, preferably also unlimited. (My 1Password still has it that I purchased Office for Mac University 2011 on Nov 12, 2012 for $99.99. That turned into a free 4-year Office 365 subscription, which contains 60 world minutes of Skype per month and 1 TB of OneDrive — oh, now it's about to go unlimited. All for $99.99. In comparison, Google charges me $119.88 per year.) Apple wants you to save everything to iCloud, and it introduced iCloud Drive and CloudKit this year — but plans still start at 5 GB and top out at 1 TB, seriously? Dude, you sell top-notch hardware with huge profits; why don't you pay your customers back by offering better cloud storage — that's also better for iCloud.

diff --git a/source/blog/2014-10-28-google-drive-no-selective-subfolder-sync.md b/source/blog/2014-10-28-google-drive-no-selective-subfolder-sync.md new file mode 100644 index 00000000..2383a0da --- /dev/null +++ b/source/blog/2014-10-28-google-drive-no-selective-subfolder-sync.md @@ -0,0 +1,54 @@

---
layout: post
title: "Google Drive — no selective subfolder sync?"
date: 2014-10-28 20:49:24 -0700
comments: true
categories:
---

Up to this point I've been using Google Drive as an online backup service, and upload files mostly manually, although I do sync `~/img` with the client.

**Aside.** Google Drive, OneDrive, etc. don't work with symlinks. Wanna keep your stuff duplicate-free yet still synced to the servers? Use `rsync` to automatically reproduce your folder structure with hard links:

    rsync -avzP --delete --link-dest=SOURCE/ SOURCE DESTINATION

e.g.,

    rsync -avzP --delete --link-dest=~/img/ ~/img/ ~/sync/GoogleDrive/img/

automatically reproduces my `~/img` in the `img` directory under Google Drive's root, yet every file in the `~/img` tree is replaced by a hard link to the original file, so the new structure within Google Drive takes little additional space — only for the directories.

That doesn't solve every problem though, as you will see shortly. *End of aside.*

So, up to this point I've mostly used Google Drive as an online backup service by manually uploading (huge) stuff.
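(A quick note on the aside above: to sanity-check that the mirrored tree really consists of hard links rather than full copies, look for regular files whose link count is still 1. A minimal sketch, using my `~/img` mirror as the example path:)

```bash
# Regular files with a link count of 1 were copied rather than
# hard-linked; ideally this prints nothing after the rsync above.
find ~/sync/GoogleDrive/img -type f -links 1
```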
But at some point I'm gonna automate parts of the uploading. As it turns out, I can keep some smaller video files on my hard drive; other bigger monsters (like movies, TV shows, etc., especially a lot of 1080i Transport Streams directly from TV broadcasts) are uploaded and then moved to external drives. Some of the smaller files, including things downloaded from YouTube, live in `~/vid/etc`, and I want to sync this folder to Google Drive. What's shocking is that this is not possible with the Google Drive client — it only allows selective syncing of the top-level folders. Let me repeat this:

**The Google Drive client only allows selective syncing of the _top-level folders_.**

This is *insane*. It's almost 2015. Everyone supports this — Dropbox, Microsoft, Box, even Baidu. Google Drive launched on April 24, 2012 — that's 2.5 years ago. [This thread](https://productforums.google.com/forum/#!topic/drive/Gs2w1BL-B9U) on the Google Drive Forum, "Ability to sync only selected sub folders", was posted on August 27, 2012, and has garnered 139 replies. They are ignored by the developers, and the accepted "answer" is to utilize Google Drive's assign-one-file/folder-to-multiple-folders feature to create a special "sync" directory. Okay, that's a stupid hard link solution on the server side. Okay, if that works… *No*. "Hard links on the server side" cannot bear different names; so what if I want to sync, say, both `vid/etc` and `aud/etc`? Whoops. So I also have to do all sorts of ugly renaming. NO, GOOGLE, NO, I won't accept that much trouble.

This kind of insanity makes me wonder:

**Do the Google Drive developers use Google Drive themselves?**

For now I'm moving `vid/etc` to `vid_etc`. Sigh.

---

There are other problems that I encountered with Google Drive sync today. For one thing, it rejects what I already have and insists on starting from scratch. I mean, say I have uploaded a folder via the web interface, and later want to keep it in sync (of course that's only possible if it's top-level), so I put what I already have there. Nope, Google Drive reports that as a conflict and insists on downloading all the stuff again. Apart from wasted traffic, that also ruins my hard links, so I have to either wipe everything clean on the server side and reupload, or redownload everything and redo all the hard-linking after the download finishes. Either way, very annoying. I opted for the first one. (I later empirically confirmed that OneDrive could handle this situation.)

There are other problems that don't come to mind right now. Anyway, the gist is:

**Google Drive is not yet sync-ready for power users (maybe even laymen).**

---

Then, is OneDrive perfect? No. I know it recently went unlimited, but there's one major annoyance: the speed. Stanford's Ethernet speed is almost 1 Gbps UL/DL, but the OneDrive client tops out at about 2 MB/s. The web interface isn't much better. Google Drive is a lot faster than that, which makes it a good backup service for manual uploading. Anyway, maybe OneDrive will improve over time; yesterday they delivered an update to OneDrive.app, and at least it finally works.

OneDrive also has no file status indicators on OS X — something everyone else in the industry offers. That's not a show-stopper though, if you just use it as a backup service.

---

What about Dropbox? Dropbox is a truly awesome sync-ready service, but it is at the same time pretty expensive compared to others.
Also, since I use Dropbox as a sync service, it must be up at all times and constantly indexing, so I'm really cautious with what I put there to avoid unnecessary cycles and startup time.

---

Everything is broken in one way or another. Sigh. Let's hope that OneDrive improves a lot in the coming months; if it's stable enough and the speed gets some boost, I'll cancel my Google Drive subscription. I am a Google supporter and Microsoft hater, but a service that *works* is more important than ideology.

diff --git a/source/blog/2014-10-28-mou-1-dot-0-fundraiser-goal-reached.md b/source/blog/2014-10-28-mou-1-dot-0-fundraiser-goal-reached.md new file mode 100644 index 00000000..bec69ef7 --- /dev/null +++ b/source/blog/2014-10-28-mou-1-dot-0-fundraiser-goal-reached.md @@ -0,0 +1,17 @@

---
layout: post
title: "Mou 1.0 fundraiser: goal reached"
date: 2014-10-28 01:57:06 -0700
comments: true
categories:
---

A week ago I wrote the post [*Help Mou hit 1.0*](/blog/2014/10/20/help-mou-hit-1-dot-0/). Today, I'm delighted to find out that Mou has reached its goal, $20,000, halfway into the fundraiser.

![Mou hit its goal on Indiegogo](http://i.imgur.com/vM298t5.png)

So what do I expect from 1.0? Most importantly,

* CommonMark support, especially GFM fenced code blocks and saner nested structures;
* A custom Markdown engine option (I would like it to call [`stmd`](https://github.com/jgm/CommonMark)).

Whatever the case, Emacs is still my best friend.

diff --git a/source/blog/2014-10-29-fun.md b/source/blog/2014-10-29-fun.md new file mode 100644 index 00000000..ce32a9b0 --- /dev/null +++ b/source/blog/2014-10-29-fun.md @@ -0,0 +1,15 @@

---
layout: post
title: "Fun"
date: 2014-10-29 11:26:29 -0700
comments: true
categories:
---

This happened in yesterday's Math 210A lecture.

> Ravi: I won't be here next Thursday.<br>
> Someone: Will there be a lecture?<br>
> Ravi: Yeah, Brian Conrad will give a lecture. Don't tell him, he don't know this yet.

...

diff --git a/source/blog/2014-11-02-vobcopy-dvdbackup-etc.md b/source/blog/2014-11-02-vobcopy-dvdbackup-etc.md new file mode 100644 index 00000000..b8690912 --- /dev/null +++ b/source/blog/2014-11-02-vobcopy-dvdbackup-etc.md @@ -0,0 +1,90 @@

---
layout: post
title: "vobcopy, dvdbackup, etc."
date: 2014-11-02 15:06:07 -0800
comments: true
categories:
---

A few days ago, I was cloning my entire Audio CD and DVD collection, and reported some of the findings in [this post](/blog/2014/10/26/audio-cd-slash-dvd-to-iso-image-on-os-x/). As said, the most important commands are

    hdiutil makehybrid -iso -joliet -o AUDIO_CD_NAME.iso SOURCE

for Audio CDs and

    hdiutil makehybrid -udf -o DVD_NAME.iso SOURCE

for DVDs.

Those alone don't finish the story. I also tried other things and unfortunately encountered problems. I was too busy to report back then, but now I'll summarize some of the findings.

---

For one thing, `hdiutil makehybrid` might fail, issuing an "Operation not permitted" for no obvious reason. This could even happen when you work with the Disk Utility GUI (for which I once got a "Permission denied"). Even `sudo` didn't help in my case. However, I was able to **circumvent the problem with the root shell** (I won't tell you how to enter the root shell — you need to at least have that amount of knowledge about the root shell before you are given the key). Not sure why. Just keep in mind that the root shell might help (that's also general, albeit dangerous, advice for life).
---

Next onto grabbing the raw VOB.

`vobcopy` is pretty sweet, but at least for me it had one huge problem. When I tried to copy a single title, say title #2 with

    vobcopy --title-number TITLE_NUMBER -i SOURCE

other titles got copied, too. I didn't have enough samples to test out, but presumably it's because the problematic DVD has a structure like this:

![problematic DVD title structure](http://i.imgur.com/HTgmwQL.png)

Anyway, no matter whether I `vobcopy` title 01, 02, or 03, the result was the same — the whole thing. That's pretty stupid. I don't know if it counts as a bug or an unfinished feature. Definitely not cool.

(One cool thing about `vobcopy`: as long as it was compiled with `libdvdread`, you can create a fully decrypted version of the DVD with

    vobcopy --mirror -i SOURCE

Of course, to get an ISO image out of the decrypted mirror, you run the `hdiutil makehybrid -udf` command given above.)

---

So `vobcopy` is dead (for copying specific titles in unfortunate DVDs). What's next?

There's `dvdbackup`. The man page is good, and [ArchWiki](https://wiki.archlinux.org/index.php/dvdbackup#A_single_title) is even better (*ArchWiki is awesome!*), providing cookbook solutions that combine the power of `dvdbackup` and `dvdauthor` (cookbooks are nice when dealing with unexciting technologies like DVD). In fact, `dvdbackup` alone is enough for extracting the VOBs of relatively small titles (< 1 GiB):

    dvdbackup -i SOURCE -o VOB_TARGET_DIR -t TITLE_NUMBER -n TITLE_NAME

then grab your title-specific VOB in `VOB_TARGET_DIR/TITLE_NAME/VIDEO_TS`. Unlike `vobcopy`'s `-n/--title-number` option, `dvdbackup`'s `-t/--title` option does it right, trimming everything else. However, there's a problem when the title is larger than 1 GiB — then `dvdbackup` will split the VOB into several 1 GiB max pieces, and there's no way to disable this (since `dvdbackup` is targeting a DVD player — ancient technology — rather than `mpv` or whatever). What's sadder is that I can't seem to combine the split VOBs with FFmpeg stream copy — `pcm_dvd` audio always gets converted to `mp2` and fails when I use `-c copy`. I'm not a codec expert, but I suppose this is due to the fact that `pcm_dvd` isn't a supported encoding codec of FFmpeg (at least not my FFmpeg):

    > ffmpeg -codecs | grep pcm_dvd
     D.A..S pcm_dvd              PCM signed 20|24-bit big-endian

`D` is for "Decoding supported", `A` is for "Audio codec", `S` is for "Lossless compression" — no encoding support. By the way, my FFmpeg is `brew`ed with the options `--with-fdk-aac`, `--with-ffplay`, `--with-freetype`, `--with-libass`, `--with-libbluray`, `--with-openjpeg`, `--with-openssl`, `--with-x265`:

    > \ffmpeg -version
    ffmpeg version 2.4.2 Copyright (c) 2000-2014 the FFmpeg developers
    built on Oct 19 2014 14:09:36 with Apple LLVM version 6.0 (clang-600.0.51) (based on LLVM 3.5svn)
    configuration: --prefix=/usr/local/Cellar/ffmpeg/2.4.2 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --enable-avresample --enable-vda --cc=clang --host-cflags= --host-ldflags= --enable-libx264 --enable-libfaac --enable-libmp3lame --enable-libxvid --enable-libfreetype --enable-libass --enable-ffplay --enable-libfdk-aac --enable-openssl --enable-libx265 --enable-libopenjpeg --disable-decoder=jpeg2000 --extra-cflags='-I/usr/local/Cellar/openjpeg/1.5.1_1/include/openjpeg-1.5 '
    libavutil      54.  7.100 / 54.  7.100
    libavcodec     56.  1.100 / 56.  1.100
    libavformat    56.  4.101 / 56.  4.101
    libavdevice    56.  0.100 / 56.  0.100
    libavfilter     5.  1.100 /  5.  1.100
    libavresample   2.  1.  0 /  2.  1.  0
    libswscale      3.  0.100 /  3.  0.100
    libswresample   1.  1.100 /  1.  1.100
    libpostproc    53.  0.100 / 53.  0.100

Maybe I missed some `--enable`.

Sorry for the digression. So, it's not possible to stream-copy-concat the VOBs with FFmpeg. (In fact, since audio quality is not that important — you won't be able to tell 256k AAC from lossless anyway, especially when you are focusing on the video — you can always transcode `pcm_dvd` into 256k AAC with `-c:a libfdk_aac -b:a 256k`. `mpeg2video` is an encoding-supported codec, so stream copy works fine. Or you may also use `flac` or whatever encoding-supported lossless codec.) However, if you insist on getting the original `pcm_dvd`, there is a way — an ugly way. You've gotta be creative here. [ArchWiki](https://wiki.archlinux.org/index.php/dvdbackup#A_single_title) already provides a cookbook solution on how to use `dvdbackup` and `dvdauthor` to create a DVD with a selected title. And `vobcopy` can copy the entire thing just fine, without the 1 GiB limit (make sure to use the `-l/--large-file` option if the size is greater than 2 GiB). Therefore, you can create a DVD with the selected title from the original DVD, then `vobcopy` from the new DVD. This is insane, but it works — I've tested that. **Note, however, that timestamps might be wrong with `vobcopy`, so the VOB runs just fine linearly but might run into problems when you seek.** Therefore, FFmpeg is still the way to go. Or maybe you can do it right with one click using some closed-source software ☹ — I've heard about success stories with the long-ceased DVD Decrypter Windows project. In reality, I guess only people with theoretical interest or OCD will ever do this — FLAC or AAC should serve everyone just fine. It should have worked with `vobcopy` alone, but it doesn't. Hence the workaround.

---

For future reference, I'll translate the ArchWiki cookbook solution here (it's a bit too cookbook itself, specifying paths like `~/movie_name` and using unnecessary `cd`) about creating a title-specific DVD from a multi-title DVD (replace `SOURCE`, `VOB_TARGET_DIR`, `DVD_TARGET_DIR`, `TITLE_NUMBER`, and `TITLE_NAME` with sane values):

    dvdbackup -i SOURCE -o VOB_TARGET_DIR -t TITLE_NUMBER -n TITLE_NAME
    dvdauthor -t -o DVD_TARGET_DIR VOB_TARGET_DIR/TITLE_NAME/VIDEO_TS/*.VOB
    export VIDEO_FORMAT=NTSC
    cd DVD_TARGET_DIR/VIDEO_TS && dvdauthor -T -o DVD_TARGET_DIR

`export VIDEO_FORMAT=NTSC` is to avoid the `dvdauthor` error of "no default video format, must explicitly specify NTSC or PAL" (I'm not sure about the difference between NTSC and PAL, but I saw NTSC printed on my DVD, so I used it). And there you go, a shiny new DVD filesystem located in `DVD_TARGET_DIR`. (Note that unlike `vobcopy`, `dvdbackup` doesn't feature a nice progress bar even when `-v/--verbose` and `-p/--progress` are specified.) Then you can

    vobcopy -l DVD_TARGET_DIR

if you'd like to. Recall that timestamps might be wrong, sadly.

diff --git a/source/blog/2014-11-05-apple-is-pushing-yosemite-hard.md b/source/blog/2014-11-05-apple-is-pushing-yosemite-hard.md new file mode 100644 index 00000000..0324c26c --- /dev/null +++ b/source/blog/2014-11-05-apple-is-pushing-yosemite-hard.md @@ -0,0 +1,32 @@

---
layout: post
title: "Apple is pushing Yosemite hard"
date: 2014-11-05 22:17:01 -0800
comments: true
categories:
---

Apple is pushing Yosemite hard and secretly Yosemitizing things.
iTunes was updated to its shiny new look on Mavericks on day one of the Yosemite launch. I liked it. The only problems I had with the new iTunes are:

* It's no longer possible to go to the Artists view and Music Videos view with one click. Artists is now a drop-down menu option in the My Music view; Music Videos is now a sidebar tab in the Playlists view.

* The red icon doesn't look as good as the previous blue one, and it doesn't harmonize too well alongside the blue Finder and Mail. Maybe they will look better together in Yosemite.

Just now I found that the Mac App Store in Mavericks is also Yosemitized:

![Yosemitized Mac App Store](http://i.imgur.com/T7KIo6s.png)

The chromeless UI certainly looks a bit weird, and the font seems a bit too small. However, I'm sure I'll quickly get over these shortcomings, and might even grow to appreciate them. Even more so when I finally upgrade to Yosemite (I'll wait for Thanksgiving break or 10.10.1 this year, whichever comes first — looks like the former will come first, considering the first 10.10.1 developer beta was out [merely two days ago](http://www.macrumors.com/2014/11/03/first-yosemite-10-10-1-beta-now-available/), [reportedly](http://i.imgur.com/IVFV7E2.png) focusing on WiFi, etc.).

On the font side, unfortunately the small font size in the new Mac App Store does make text fuzzy on my non-Retina display. I didn't notice the fuzziness in the new iTunes. Anyway, everything is optimized for Retina — Apple is also secretly pushing Retina hard. In the Apple ecosystem, apparently you always want to stay at the top of the line. Due to Apple's nature, the degree of hardware-software integration is simply unparalleled, so only the latest hardware enjoys all the new features and love. Unfortunately, the typical lifespan of a Mac is much longer than the hardware/software release cycle.

The truth is, we (or at least I) adapt to things much quicker and easier than we'd imagined (or otherwise would be willing to admit). When iOS 7 was demoed at WWDC 2013, I was like "no way! What's with that stupid flat crap?" But when the update came, I hopped on and was happy ever since. (I reserve my opinion on the icon design.) Later when I installed an app that was not updated for the new UI, I was like "OMG, what's with this clumsy UI?" I guess legibility issues, WiFi issues (with Yosemite) and responsiveness issues (with iOS 8 on iPhone 4S — *AT&T, where's my preordered 6 Plus?*) are harder or impossible to adapt to, but they will eventually be fixed, or support-dropped, or hardware-upgraded (with $$). So in the end (almost) everyone will be happy. And one day when we face the old thing again by chance, we'll flee as fast as we can: "what's with that..." (I did exaggerate. Also, I know, many people just don't care about design, or have ill tastes, or can't find the buttons after a minimal number of UI tweaks — I'm not including them in the "we" here.) Those who claim that "I won't upgrade until..." are just kidding themselves, and certainly no one will wait for them or make their "until..." come true. Most of those "won't upgrade" folks will eventually upgrade and be happy. Those who really don't upgrade (like those still running Snow Leopard) either can't afford the new hardware, or suffer from senility (as seen from the deep hostility towards new things). Or they have some important legacy software that would break with an upgrade; poor dudes.

Same goes for newer, shinier, seemingly unnecessary deluxe hardware.
I'm still stuck on a Mid-2012 MBP 13" Non-Retina. Retina seems pretty useless for me since I do most of my work on the 27" external display — the 13" internal LCD hosts a single maximized Activity Monitor window most of the time (and sometimes also a Transmission window). I expect my opinion about Retina to change after I upgrade to a Retina model next year. (It will be a pretty major purchase though — i7, 16 GB memory, and 512 GB SSD are musts, and there are like a million peripherals to buy — Thunderbolt to Ethernet, SuperDrive, etc., etc.) This shift of opinion after upgrading to something unnecessarily good (and ending up not being able to tolerate anything inferior) is best described as "**sadly converted**". Some folks are already sadly converted by the Retina iMac. See, for example, [this article](http://arstechnica.com/apple/2014/11/yes-the-5k-retina-imacs-screen-runs-at-60hz-at-5k-resolution/#p11) on Ars Technica:

> Prior to getting the Retina iMac on my desk, I would have said that "retina" isn't necessarily something we need on the desktop. Most people don't notice pixels on the desktop at a normal seating distance. However, after getting to A/B compare the Retina iMac with the standard one over the course of a week, I'm sadly converted. I just don't know if my wallet can take the abuse.

And others are bitterly desiring one (me included — sadly two Macs are too much for a university dorm). See, for example, [this comment](http://arstechnica.com/apple/2014/11/yes-the-5k-retina-imacs-screen-runs-at-60hz-at-5k-resolution/?comments=1&post=27896871) on the aforementioned article:

> I WISH APPLE TECHNICA WOULD STOP WITH ALL THESE ARTICLES ABOUT THE IMAC.
>
> It really makes me want one unjustifiably.

diff --git a/source/blog/2014-11-05-list-youtube-playlist-with-youtube-dl.md b/source/blog/2014-11-05-list-youtube-playlist-with-youtube-dl.md new file mode 100644 index 00000000..a0d8af7b --- /dev/null +++ b/source/blog/2014-11-05-list-youtube-playlist-with-youtube-dl.md @@ -0,0 +1,71 @@

---
layout: post
title: "List YouTube playlist with youtube-dl"
date: 2014-11-05 10:37:58 -0800
comments: true
categories:
---

Of course you are always welcome to use the [Google APIs Client Library for Python](https://developers.google.com/api-client-library/python/) to wrestle with YouTube, which is usually pretty simple. (As an added bonus, YouTube has some [nice runnable sample scripts](https://developers.google.com/youtube/v3/code_samples/) to get you started.) With the client library, listing videos in a YouTube playlist is a breeze.

However, if you don't feel like writing code yourself (I usually don't feel like writing code myself until I use something often enough and existing solutions are suboptimal), `youtube-dl` recently added the functionality to list videos in a playlist with the `--flat-playlist` option.

[According to one of the project collaborators](https://github.com/rg3/youtube-dl/issues/4003#issuecomment-60322630), currently `--flat-playlist` is only helpful with the `-j` option for dumping JSON (so I suppose this feature is subject to change).
For instance, `--flat-playlist` alone would emit something like this:

```bash
> youtube-dl --flat-playlist 'https://www.youtube.com/watch?v=gdOwwI0ngqQ&list=PLPpZI8R1zUfrkDbmJMOBhEbJ9Td9vbV-F'
[youtube:playlist] Downloading playlist PLPpZI8R1zUfrkDbmJMOBhEbJ9Td9vbV-F - add --no-playlist to just download video gdOwwI0ngqQ
[youtube:playlist] PLPpZI8R1zUfrkDbmJMOBhEbJ9Td9vbV-F: Downloading webpage
[youtube:playlist] PLPpZI8R1zUfrkDbmJMOBhEbJ9Td9vbV-F: Downloading page #1
[download] Downloading playlist: Cam By apinknomfan
[youtube:playlist] playlist Cam By apinknomfan: Collected 119 video ids (downloading 119 of them)
[download] Downloading video #1 of 119
[download] Downloading video #2 of 119
[download] Downloading video #3 of 119
[download] Downloading video #4 of 119
...
```

which doesn't really make sense — it tells you that it collected 119 video ids, and no more. Once you have `-j` on, you get JSON data that you can parse with anything:

```bash
> youtube-dl -j --flat-playlist 'https://www.youtube.com/watch?v=gdOwwI0ngqQ&list=PLPpZI8R1zUfrkDbmJMOBhEbJ9Td9vbV-F'
{"url": "gdOwwI0ngqQ", "_type": "url", "ie_key": "Youtube", "id": "gdOwwI0ngqQ"}
{"url": "j9l5nchv1Z8", "_type": "url", "ie_key": "Youtube", "id": "j9l5nchv1Z8"}
{"url": "znW5ALwWNQw", "_type": "url", "ie_key": "Youtube", "id": "znW5ALwWNQw"}
{"url": "qyE7-auTIcc", "_type": "url", "ie_key": "Youtube", "id": "qyE7-auTIcc"}
...
```

The most straightforward way to parse this is to use a command-line JSON parser, the best one being [jq](https://github.com/stedolan/jq):

```bash
> youtube-dl -j --flat-playlist 'https://www.youtube.com/watch?v=gdOwwI0ngqQ&list=PLPpZI8R1zUfrkDbmJMOBhEbJ9Td9vbV-F' | jq -r '.id' | sed 's_^_https://youtube.com/v/_'
https://youtube.com/v/gdOwwI0ngqQ
https://youtube.com/v/j9l5nchv1Z8
https://youtube.com/v/znW5ALwWNQw
https://youtube.com/v/qyE7-auTIcc
...
```

There you go, a list of URIs you can use. Of course you can put this in a script to save some typing:

```bash youtube-ls-playlist.sh https://gist.github.com/zmwangx/0245788475f963210ed9 Gist
#!/usr/bin/env bash
# Takes a YouTube URI to a playlist (fairly liberal, it's fine as long
# as the playlist id can be extracted), and prints a list of URIs in a
# YouTube playlist.
#
# Requires youtube-dl 2014.10.24, tested on youtube-dl
# 2014.11.02.1. Feature subject to change.
youtube-dl -j --flat-playlist "$1" | jq -r '.id' | sed 's_^_https://youtube.com/v/_'
```

**_Aside:_** I first embedded the gist here, but [it looked a bit off](http://i.imgur.com/m3cr0Im.png). See [imathis/octopress#1392](https://github.com/imathis/octopress/issues/1392).

> In the next version of the Gist tag plugin we are just downloading the gists and embedding them upon generation so we don't have to worry about GitHub going down and breaking all your gists, or changing the HTML and breaking all the styles.
>
> For the time being I suggest embedding your code snippets directly if you want them to look good.

Okay. End of aside.

By the way, `youtube-dl` supports playlist bulk download natively. The reason I need a list of video ids or URIs, however, is that, among other things, `youtube-dl` doesn't download the highest-resolution DASH video by default, so I have to rely on something like `youtube-dl-dash` ([link](https://github.com/zmwangx/sh/blob/master/youtube-dl-dash)) to download the best version.
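(Postscript: if you'd rather take the API route mentioned at the top without pulling in the Python client library, plain `curl` against the YouTube Data API v3 works too. A rough sketch, assuming you have an API key exported as `YOUTUBE_API_KEY`; pagination beyond the first 50 items is omitted:)

```bash
#!/usr/bin/env bash
# Sketch: print URIs for the first 50 videos of a playlist, given a
# playlist id as $1. Assumes an API key in $YOUTUBE_API_KEY.
curl -s "https://www.googleapis.com/youtube/v3/playlistItems?part=contentDetails&maxResults=50&playlistId=$1&key=$YOUTUBE_API_KEY" |
    jq -r '.items[].contentDetails.videoId' |
    sed 's_^_https://youtube.com/v/_'
```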
diff --git a/source/blog/2014-11-06-2014-nobel-prize-in-physics-led-lights-seriously.md b/source/blog/2014-11-06-2014-nobel-prize-in-physics-led-lights-seriously.md new file mode 100644 index 00000000..f68ea55d --- /dev/null +++ b/source/blog/2014-11-06-2014-nobel-prize-in-physics-led-lights-seriously.md @@ -0,0 +1,19 @@

---
layout: post
title: "2014 Nobel Prize in Physics — LED lights, seriously?"
date: 2014-11-06 11:08:45 -0800
comments: true
categories:
---

For some reason, I only learned about this year's laureates today, through [The Reference Frame](http://motls.blogspot.com/2014/11/ex-employer-wont-meet-blue-led-nobel.html). The prize goes to the inventors of the LED. Not exciting at all, so I don't care if I'm ever informed. (Lubos has a good point on why applied physics — well, let's even widen the concept of applied physics a bit — should not surprise anyone when it appears in a Nobel Prize announcement: "After all, Alfred Nobel might have very well considered his dynamite to be a discovery in physics, too.")

The Nobel Prize in Physics has been rather amusing in recent years. Partly due to controversy on the theoretical front and no breakthrough on the experimental front, I guess. (The discovery of the Higgs was a breakthrough to some extent, but it was totally expected; little physics beyond the SM at the LHC before LS1 is sad. Since it was totally expected, the award went to theorists — some experimentalists might not be too thrilled.) Just look at the recent list: 2010, graphene (shouldn't that be chemistry? if material science can be part of physics, then chemistry must surely encompass it, too); 2009, the CCD sensor (whatever that means, maybe that's important) and optical fibers (okay, I love fast Internet connections). The list actually goes all the way down to the early 20th century, but it's much denser in recent decades. Now one more: 2014, LED.

> "for the invention of efficient blue light-emitting diodes which has enabled bright and energy-saving white light sources"

Wow, energy-saving. I guess next year's prize will go to the inventors of high-efficiency urinals, for being resource-saving. **I mean, LEDs are great, but not great as physics, much less a physics breakthrough.**

Let's compare the applied folks to other laureates: Lorentz, J. J. Thomson, Planck, Einstein, Bohr, Heisenberg, Schrödinger, Dirac, Fermi, Pauli, Born, Lee, Yang, Landau, Feynman, Schwinger, Gell-Mann, Weinberg, 't Hooft, etc. Not on the same level.

It's good to see that a growing body of physicists is (or at least so it seems to me) increasingly reserved about the Nobel Prize, and prizes in general. Same goes for math, with the Nobel Prize replaced by something else. (Employers still love the big titles, I suppose — good to have some star faculty on board.)

diff --git a/source/blog/2014-11-07-interstellar.md b/source/blog/2014-11-07-interstellar.md new file mode 100644 index 00000000..8cd93b8f --- /dev/null +++ b/source/blog/2014-11-07-interstellar.md @@ -0,0 +1,16 @@

---
layout: post
title: "Interstellar"
date: 2014-11-07 23:56:31 -0800
comments: true
categories:
---

Today (I mean November 7, 2014 — it's technically November 8 at the time of writing) I saw [*Interstellar*](https://en.wikipedia.org/wiki/Interstellar_%28film%29) (IMAX digital) at AMC Mercado 20. I rarely go to movie theaters, much less on release day (film formats of *Interstellar* were released on November 5, and digital formats followed on November 7).
However, reviews of it were positive (from the physics community), and I really needed a way to release stress these days — I haven't been in the right condition for months. So I figured I'd just spend an afternoon in front of the big screen.

I've heard good things about IMAX 70mm film — no trailers, among other things. But I didn't bother with the extra seven miles' drive to the Hackworth IMAX Dome, so I landed on IMAX digital. Still a pretty stunning experience, although I ended up watching forty minutes' worth of trailers before the real thing (I arrived fifteen minutes early). In hindsight, maybe I'd have been better off spending some extra time on the road.

Speaking of the movie itself, they tried to make it as plausible as possible from a physics viewpoint, by involving [Kip Thorne](https://en.wikipedia.org/wiki/Kip_Thorne) of Caltech. (There's even a book out, *The Science of Interstellar*.) For instance, the wormhole isn't portrayed as a hole (I mean, imagine a ring in empty space, as you would picture in your mind when you hear the word "hole" out of nothing; that's not how it looks here, and there's a nice explanation). Some of the physics still doesn't work out quite well, though, like Cooper's not being crushed at the singularity… Also there is communication from within the horizon and a breakdown of causality, explained in the movie as happening in four-dimensional slices of some five-dimensional spacetime where time could be a physical dimension. Obviously these are needed to get the sci-fi story going, so no one should be blamed.

I was there for the sci-fi and physics, but a nice surprise was that I was also touched by the human elements of the story, I mean, family and stuff.

The downside: Cooper's accent was a bit hard on me ☹

diff --git a/source/blog/2014-11-10-average-phone-plan-in-the-u-dot-s-costs-ten-time-as-much-as-that-in-the-u-dot-k.md b/source/blog/2014-11-10-average-phone-plan-in-the-u-dot-s-costs-ten-time-as-much-as-that-in-the-u-dot-k.md new file mode 100644 index 00000000..1bce5027 --- /dev/null +++ b/source/blog/2014-11-10-average-phone-plan-in-the-u-dot-s-costs-ten-time-as-much-as-that-in-the-u-dot-k.md @@ -0,0 +1,12 @@

---
layout: post
title: "Average phone plan in the U.S. costs ten times as much as that in the U.K."
date: 2014-11-10 11:11:46 -0800
comments: true
categories:
---

To quote [Opera News](http://blogs.opera.com/news/2014/11/mobile-data-anyway/),

> According to research by the International Telecommunication Union, the average phone plan with 500MB of data costs $85 in the United States, compared to $24.10 in China and $8.80 in the United Kingdom.

Holy shit!

diff --git a/source/blog/2014-11-11-re-encoding-everything-for-iphone-6-plus.md b/source/blog/2014-11-11-re-encoding-everything-for-iphone-6-plus.md new file mode 100644 index 00000000..51d5ed37 --- /dev/null +++ b/source/blog/2014-11-11-re-encoding-everything-for-iphone-6-plus.md @@ -0,0 +1,8 @@

---
layout: post
title: "Re-encoding everything for iPhone 6 Plus"
date: 2014-11-11 13:31:25 -0800
comments: true
categories:
---

AT&T finally delivered my iPhone 6 Plus (silver, 64 GB) about fifty days after preorder… The 1080p Retina display is simply stunning. However, it turns out that my old videos don't work so well on the 6 Plus's giant screen. My old mobile video collection was optimized for my 16 GB 4S, targeting the small screen and highly limited storage — you guessed it, they were resized to 960x540, and they looked great. But they're not up to the task any more.
960x540 videos aren’t at all sharp on the stunning screen of 6 Plus, which is more than capable of handling 4x the pixels. Therefore, I have no choice but to feed it more pixels. I’m left in a weird situation, where my 1080p desktop (or even HDTV) quality videos should fit the screen just fine, but the H.264 level stands in the way. iPhone 6 and 6 Plus are only capable of High Profile level 4.2, so anything encoded in level 5.1, for instance, needs to be re-encoded. Also there are still MPEG-2 and MPEG-4 videos out there (MPEG-4 should be obsolete by now, I assume, but some people still use it; and MPEG-2 is the de facto standard in TV broadcasts), which have to be transcoded. Okay, it’s a daunting task to re-encode a fairly big collection, but I have to do it sooner or later. Presumably this weekend. I’ll also report whether 720p videos look sharp on the screen later. diff --git a/source/blog/2014-11-19-convolution-of-irreducible-characters.md b/source/blog/2014-11-19-convolution-of-irreducible-characters.md new file mode 100644 index 00000000..f3f3207e --- /dev/null +++ b/source/blog/2014-11-19-convolution-of-irreducible-characters.md @@ -0,0 +1,16 @@ +--- +layout: post +title: "Convolution of irreducible characters" +date: 2014-11-19 20:40:37 -0800 +comments: true +categories: +--- +__*TL; DR:* The actual PDF write-up is [here](https://dl.bintray.com/zmwangx/generic/20141119-convolution-of-irreducible-characters.pdf).__ + +--- + +Yesterday I was trying to establish the formula for orthogonal primitive central idempotents of a group ring. It is possible to establish the result through the convolution of irreducible characters. However, I was stuck for quite a while trying to work out the convolutions themselves. For a formidable and unenlightening proof using "matrix entry functions" (i.e., fix a basis, induce a matrix representation, and explicitly expand everything in matrix elements), see [this post](http://drexel28.wordpress.com/2011/03/02/representation-theory-using-orthogonality-relations-to-compute-convolutions-of-characters-and-matrix-entry-functions/) (in fact, this is just one in a series of posts that lead up to the result). That's a really sad proof. + +It turns out that I really should have been working the other way round --- first establish the orthogonal idempotents (the proof of which is really simple and elegant; I was just trapped in a single thread of thought), then use that to compute the convolution of irreducible characters. + +I feel like this is worth presenting (as the only proof I saw online is the really sad one above), so I TeX'ed it up. I tried to convert to MathJax HTML but eventually gave up (that's a story for another post). So, the write-up is in good ol' PDF, available [here](https://dl.bintray.com/zmwangx/generic/20141119-convolution-of-irreducible-characters.pdf). diff --git a/source/blog/2014-11-20-dropbot-for-geeks(r).md b/source/blog/2014-11-20-dropbot-for-geeks(r).md new file mode 100644 index 00000000..ad075302 --- /dev/null +++ b/source/blog/2014-11-20-dropbot-for-geeks(r).md @@ -0,0 +1,26 @@ +--- +layout: post +title: "Dropbot for Geeks®" +date: 2014-11-20 09:48:15 -0800 +comments: true +categories: +--- +I propose the following cloud storage and syncing service model of the future. I call it **Dropbot for Geeks®**, and it totally rules. It's designed for geeks who are tired of the highly limited, miserably unproductive traditional services (based on clicking around). 
It has the following features: + +* Standard Unix file system commands exposed as an API, e.g., `cat`, `cd`, `cp`, `du`, `df`, `file`, `find`, `head`, `ln`, `ls`, `mkdir`, `mv`, `pwd`, `rm`, `rmdir`, `tail`, `touch`, etc. + +* A rudimentary shell emulator through the web interface exposing the commands above. + +* Secure shell access to the file system, also exposing the commands above. Provide two-factor auth for SSH. Clearly, `scp` should also be supported. + +* Checksums. Expose, for instance, `md5sum` or `sha1sum`, in the API. Provide checksums on download pages, probably on demand. + +* Programmable selective syncing, down to the per-file level. + +* Scriptability. Allow clients to run custom scheduled jobs or daemons with the API above. To prevent the service from becoming a full-featured IaaS, though, clients might be limited in CPU time, memory, or command selection. This bullet point is arguable. + +--- + +With the level of command line integration illustrated above, we'll finally get rid of clicking around and not being able to automate chores. Navigating the remote file system will be a breeze — click, click, click, click, click (sometimes with a double click, which is even more painful) just to reach a directory will be a thing of the past. `ln`, in particular, saves disk space for duplicates — Dropbot for Geeks does *not* want to charge you extra for multiple copies of the same file in different directories. (To facilitate syncing hardlinks, clients should be able to specify hardlinked files in a config file. Or maybe some better mechanism. This might be hard.) Lastly, checksums are a must. I’ve had traumatic experiences like having downloaded an eight-part RAR, 1 GiB each, only to find that it wouldn’t unRAR. Without checksums, it was impossible to find which part was corrupted. As a result, I had to re-download everything — a nightmare. I never want to experience similar problems again. Hence the precious checksums. + +Dropbot for Geeks looks like a pretty good (well, not really, but at least pretty cool®) model. Maybe I should patent it before anyone else does? Then if some similar service surfaces in the future, I can [sue their ass off and enjoy some hot cash](http://arstechnica.com/tech-policy/2014/11/jury-apple-must-pay-23-6m-for-old-pager-patents/). diff --git a/source/blog/2014-11-24-iphone-photography-frustration.md b/source/blog/2014-11-24-iphone-photography-frustration.md new file mode 100644 index 00000000..1a375f25 --- /dev/null +++ b/source/blog/2014-11-24-iphone-photography-frustration.md @@ -0,0 +1,32 @@ +--- +layout: post +title: "iPhone photography frustration" +date: 2014-11-24 12:42:25 -0800 +comments: true +categories: +--- +**TL; DR:** Jump to the paragraph “In the end…” + +--- + +I'm not a photo-savvy guy. I've never taken a single selfie in my life, and my iPhone Photos app witnesses about twenty photos a year — fifteen of which are accidental screenshots. Okay, a little bit of exaggeration. The iPhone, aka iCamera, especially the 6 Plus model, is sort of a waste in my possession. However, my grandparents came to visit me on campus yesterday, and my grandma *loves* photos. So I took one with them, using my phone (which is obviously better than my grandma’s digital camera). The lighting was wrong, but I was able to fix that in five seconds — clicked “Edit”, clicked on the little sun icon (whatever it is called), clicked on “Light”, slid the slider all the way to the right, and all of a sudden it looked perfect. 
(This was the first time I’d ever clicked those, but I got it nicely done in five seconds — very intuitive, deeply impressed.) From a layman’s perspective, the builtin edit feature of the Photos app is really smart. Later I tried to reproduce the same edit with iPhoto, but I had to manually wrestle with exposure, highlights, shadows, brightness, contrast, etc., and I just never got it right. Call me a moron if you feel like it, but I’ve already given you the context at the beginning of this article, so you can’t blame me. (For your reference, I do ’shop some images for fun and profit from time to time, but I never deal with actual unedited photos, so I never have to worry about exposure and stuff.) + +So far so good. The frustration began when I tried to import the photo to my Mac. I’m certainly not a mobile guy who keeps everything on his phone or the cloud. I have the photo import feature of Dropbox turned on, so as I plugged in my phone the photo already appeared in the Camera Uploads folder. Wait what, two copies? Two copies that looked exactly the same, albeit one was about 0.1 MB larger than the other? Not cool. Pulled up Image Capture; Apple’s own software should do the trick, I guess. Nope, same thing. Googled, found this article on support.apple.com: [*iOS: Edited photos show original photo after import or in other apps*](http://support.apple.com/en-us/HT203612). Okay sure, they know this. Let's listen to what they have to say. + +> Apple uses Extensible Metadata Platform (XMP), a standard created by Adobe, for nondestructive photo editing. XMP allows you to undo edits and to revert back to an original photo without the loss of quality. Displaying the edited photo requires OS X v10.9 or later and software that can read XMP. The following applications support XMP: +> +> * iPhoto 9.5 or later +> * iPhoto for iOS +> * Aperture 3 or later +> * Adobe Lightroom +> * Adobe Photoshop Elements +> +> Other photo-management applications and some iOS apps may also display XMP. + +Adobe Lightroom, oh well. I don’t yet have Adobe CC installed — Photoshop is certainly a powerhouse (plus Illustrator, InDesign, etc. are nice to have at times), but Pixelmator works fine for me most of the time, and I’m happy to go to a library iMac or Mac Pro when I do need the power; $29.99/$19.99 is just too much if I use it twice a month. I theorize that I can indefinitely extend the free trial by running CC in a VM, reverting to a clean snapshot and regenerating the MAC address from time to time, but that’s only a theory — I haven’t had the incentive to test it out. As for iPhoto, I know it’s pretty lame from past experience, and I was very much baffled by the ugly and unintuitive UI, so normally I don’t even want to waste disk space on it; but since it’s the only official solution other than expensive Adobe products and Aperture, I decided to install it. 1.7 GB gone. *So does the wasted 1.7 GB do the trick? Sadly, no.* Still the same thing in iPhoto 9.6, which is clearly “iPhoto 9.5 or later”. Totally baffled. + +In the end, I came up with an ugly solution. Just email or iMessage the photo to yourself from the phone. If you use email though, be sure to use the builtin Mail, or you will likely lose Exif data (I sent from Mailbox and lost Exif; I then sent from Mail and didn’t). This is really annoying since setting up accounts in the builtin Mail app is not fun — Google 2FA is not supported, and I have to generate an app-specific password. 
Compare that to Mailbox, where *I signed into Dropbox on the new phone and within five seconds all ten of my email accounts were ready to use*. The loss of Exif data is probably related to [this thread on SO](http://stackoverflow.com/questions/20763814), but I didn’t delve into it since I’m not a Cocoa Touch dev. What’s more confusing, sending the photo via different applications results in different file sizes. The one sent from Mail (I chose original size when I sent it) was a 2 MB JPEG; the one sent from Mailbox was a 4.5 MB JPEG (without the right Exif); and the one sent via iMessage and later opened from the Messages app on Yosemite could be saved as a 10 MB lossless PNG (Exif was there). I went with the 2 MB one in the end. + +I don’t know if iCloud Photo Library will solve the problem in the end. From my perspective, *Apple should train Preview and QuickLook to recognize their XMP technology.* Seriously, they talk about continuity, and I expect to enjoy my enhanced photos on my Mac without going through all the hassle and confusion. Photo enhancing is already such a breeze, thank you; now make sharing and archiving easy, at least within the Apple ecosystem. (Although I’m not familiar with photography and image editing in general, considering how tech-savvy I am, I bet most users can’t figure out even one annoying solution.) + +By the way, Continuity and Handoff only work intermittently for me, and AirDrop between my iPhone (6 Plus) and MacBook Pro (mid-2012, model 9,2) doesn’t work at all. Continuity and Handoff sometimes turn up when they are unexpected (and serve as kinda nice surprises), but when I try to nail them, they remain elusive. Not a big deal for me, but certainly not the most pleasant thing ever. I bet these have to do with the fact that my Mac is connected to Ethernet and sharing the Ethernet connection with my iPhone; these days they expect everyone to be on Wi-Fi (and ironically they messed up big time in Yosemite), but Wi-Fi simply can’t beat the speed and stability of Ethernet. I didn’t bother to test whether the features work when my devices are connected to the same Wi-Fi network; even if they do, that’s not my production setup, so they’re still useless to me. diff --git a/source/blog/2014-11-24-why-i-abandoned-mathjax-and-fell-back-to-pdf.md b/source/blog/2014-11-24-why-i-abandoned-mathjax-and-fell-back-to-pdf.md new file mode 100644 index 00000000..10f105d8 --- /dev/null +++ b/source/blog/2014-11-24-why-i-abandoned-mathjax-and-fell-back-to-pdf.md @@ -0,0 +1,12 @@ +--- +layout: post +title: "Why I abandoned MathJax and fell back to PDF" +date: 2014-11-24 20:54:36 -0800 +comments: true +categories: +--- +Recently I wrote an expository article, [*Convolution of irreducible characters*](/pdf/20141119-convolution-of-irreducible-characters.pdf), and posted it [here](/blog/2014/11/19/convolution-of-irreducible-characters/). At first I intended to use MathJax, but in the end I fell back to good ol' PDF. Here's why. + +In short, I'm a mathematician. I write math *articles*, not just standalone expressions or formulas. I use AMSLaTeX to its fullest (not really, but at least I use numbering and the `amsthm` package to its fullest). HTML simply wasn't designed for this. These are two influential markup languages designed for totally different use cases, and bridging them is painful. I tried to use `pandoc`, but it doesn't support `\input`, doesn't support `\def`, and swallows `\begin{theorem} \end{theorem}`, among other things. 
I tried to use `htlatex`; even the MathML output is suboptimal, with many math symbols translated to non-math ones (apart from being totally human-unreadable), and it uses its own custom CSS files that don't play well with anything else. I tried other things. In the end I gave up. Maybe I don't know enough about MathJax, but I certainly don't want to write a translator myself. Leave LaTeX alone. Distribute as PDF. MathJax may be great for Wikis (like Wikipedia) and for math-lite blogs, but it's no replacement for real, beefy LaTeX. It's not for mathematicians who want to distribute real articles. + +By the way, Terry Tao and others use [Luca's LaTeX to WordPress, aka LaTeX2WP](http://lucatrevisan.wordpress.com/latex-to-wordpress/) for math blogging. From Terry's experience it works fairly well. I don't know if `amsthm` and `\def` are in the feature set, though. Anyway, since WordPress handles LaTeX as pre-compiled images (which is also the default on Wikipedia, and which looks poor in general and plays horribly with scaling), LaTeX2WP won't help MathJax users in the slightest. diff --git a/source/blog/2014-11-25-i-got-16-gigs-of-ram.md b/source/blog/2014-11-25-i-got-16-gigs-of-ram.md new file mode 100644 index 00000000..d55a642e --- /dev/null +++ b/source/blog/2014-11-25-i-got-16-gigs-of-ram.md @@ -0,0 +1,20 @@ +--- +layout: post +title: "I got 16 gigs of RAM" +date: 2014-11-25 16:28:30 -0800 +comments: true +categories: +--- +Today I upgraded the RAM of my MacBook Pro mid-2012 to 2x8GB. I purchased the [Crucial 16GB Kit (8GBx2) DDR3/DDR3L 1600 MHz (PC3-12800) CL11 SODIMM 204-Pin 1.35V/1.5V Memory for Mac CT2K8G3S160BM](http://smile.amazon.com/dp/B008LTBJFW) from Amazon, which cost me $146.64 after tax. I followed the [official guide](http://support.apple.com/en-us/HT201165) as well as the [iFixit guide](https://www.ifixit.com/Guide/MacBook+Pro+13-Inch+Unibody+Mid+2012+RAM+Replacement/10374). To finish the job I needed a Phillips #00 screwdriver and a spudger, so I purchased the [spudger](https://www.ifixit.com/Store/Tools/Spudger/IF145-002) and the [54 bit driver kit](https://www.ifixit.com/Store/Tools/54-Bit-Driver-Kit/IF145-022-1) from iFixit. + +The actual process was pretty simple. I had a little bit of a hard time pulling out the bottom module and pushing in the top module, but overall it was smooth. The only stupid thing I did was that I forgot to push the battery connector back in before I closed the case; I only realized this when I was screwing in the eighth screw (that was a close one!), and had to unscrew everything again. + +After I replaced the RAM modules, booting was just normal. And now I've got 16 gigs of RAM! + +![](http://i.imgur.com/PGhdEGr.png) + +Want to run multiple memory hoggers *along with a Windows VM* (with 4GB of RAM)? No problem. + +![](http://i.imgur.com/czDcVaK.png) + +By the way, Yosemite is indeed really aggressive at RAM usage. I reserve my opinion on whether there's a memory leak. But so far the performance has been fine, even with 8GB of RAM. diff --git a/source/blog/2014-11-26-original-images-in-day-one-journal.md b/source/blog/2014-11-26-original-images-in-day-one-journal.md new file mode 100644 index 00000000..c738699c --- /dev/null +++ b/source/blog/2014-11-26-original-images-in-day-one-journal.md @@ -0,0 +1,42 @@ +--- +layout: post +title: "Original images in Day One journal" +date: 2014-11-26 00:22:16 -0800 +comments: true +categories: +--- +**TL; DR:** Jump to the paragraph beginning with “workaround”. + +--- + +I started a Day One journal two days ago. 
I've heard good things about Day One, but after using it for a dozen entries, I'm not that satisfied. For one thing, the editor is pretty horrible — keybindings aside, I can’t even find and replace?! And the overview doesn’t look very pretty if you are a heavy Markdown user (i.e., you have a lot of markups, e.g., italics and inline links) — the markups are displayed as is. Moreover, I can’t even `#` my title: it kills the bold font rendering in the bird’s-eye view. What a letdown. Anyway, it’s better than nothing, and I hope it will help me stay on track. (I used to manage a Markdown journal in an encrypted sparse bundle, and it was a pain in the ass — mentally. Maybe some GUI sugar is necessary, although Day One is certainly not as pretty as advertised.) Also a private journal means more privacy — I certainly don’t want to publish everything I write on this public blog. + +Too much irrelevant talking. Onto one of the most annoying “features”, and the subject of this post. Images are automatically JPEG-compressed when imported into Day One. See [this support article](https://dayone.zendesk.com/hc/en-us/articles/200145875-Are-photos-resized-when-imported-into-Day-One-), which says: + +> Every photo imported into Day One is converted to JPEG format and resized to a maximum resolution of 2100 x 2100 pixels. The aspect ratio is maintained. We resize photos for more efficient sync and storage. At this size the average photo is about 700KB which means you can store: +> * Dropbox: 2,500+ photos using the free 2GB account +> * iCloud: 6,000+ photos using the free 5GB account + +What the heck. Dude, who cares about storage these days? And transfer rate? I have gigabit Ethernet. I certainly have much more than 2GB in my Dropbox. Even for those underprivileged folks with only 2GB, remember, Day One allows *only one photo per entry*. That’s 2,500+ entries. At any rate, this should be an opt-in rather than an uncustomizable “feature”. I’m about to submit a ticket, but I doubt the outcome (I’m sure many people have submitted tickets about the plain text format even when password-protected, but so far, no response). + +With photos, most of the time JPEG compression works pretty well (but people surely want to keep photos in the highest quality). However, I’m a techie guy, and my images are often screenshots or precision images, where JPEG compression totally ruins the sharpness. + +Workaround? Simple (yet a bit annoying). Day One lets you show the photo in Finder. So just go ahead and replace that compressed image with the original using `cp` or `mv`. I shouldn’t have needed to do this, but every piece of software comes with some annoyances. Overall Day One’s pretty good — at least it does what it was designed for, albeit not perfectly. + +--- + +By the way, here is my support ticket: + +> I understand that Day One applies JPEG compression to every imported photo, as written in the support article “Are photos resized when imported into Day One?” http://goo.gl/Rzi017 . Yet I beg for an option. 
The reason is that the benefits outlined in the support article are virtually non-existent: +> +> * “More efficient sync and storage”: these days transfer rates are really high with SSDs and gigabit Ethernet, so reducing a few hundred KBs won’t help me in the slightest; +> +> * “Dropbox: 2,500+ photos using the free 2GB account”: I have much more than 2GB in my Dropbox; even if I only have 2GB, Day One allows only one photo per entry, and that means 2,500+ entries with photos, which is more than enough for most users, I suppose. By the way, Dropbox storage shouldn’t be your concern; people will buy more when they need more. +> +> And some of the bad things about JPEG compression: +> +> * JPEG compression usually works pretty well with photos, but when I import high precision images, the sharpness is totally ruined; +> +> * People want to keep their photos in the highest quality, which is defeated by forced compression. +> +> I know an ugly workaround, which is simply replacing the compressed image with the original in the filesystem. But I would love to see an option to import images as originals (in fact, compression should be an opt-in). Really, transfer rates and storage grow so rapidly that they are not people’s primary concerns anymore. (For your information, OneDrive recently rolled out truly unlimited storage to Office 365 subscribers. Online storage is that cheap.) Thanks. diff --git a/source/blog/2014-11-28-given-infinite-time.md b/source/blog/2014-11-28-given-infinite-time.md new file mode 100644 index 00000000..833b4de4 --- /dev/null +++ b/source/blog/2014-11-28-given-infinite-time.md @@ -0,0 +1,8 @@ +--- +layout: post +title: "Given infinite time" +date: 2014-11-28 00:18:19 -0800 +comments: true +categories: +--- +Given infinite time. There's so much I can do *given infinite time*. I don't think I'll ever be bored. But sadly the time assigned to each human being is finite. Actually it's epsilon, epsilon approaching zero. Sadly. diff --git a/source/blog/2014-11-28-going-diceware.md b/source/blog/2014-11-28-going-diceware.md new file mode 100644 index 00000000..e649f374 --- /dev/null +++ b/source/blog/2014-11-28-going-diceware.md @@ -0,0 +1,12 @@ +--- +layout: post +title: "Going Diceware" +date: 2014-11-28 19:05:59 -0800 +comments: true +categories: +--- +Today I'm officially going [Diceware](http://world.std.com/~reinhold/diceware.html). I published my simple C implementation of diceware on [GitHub](https://github.com/zmwangx/diceware). + +I've been using 1Password for a couple of years now, and I've always been a bit worried about my master password. It's a ~30-byte monster with uppercase and lowercase letters, numbers, and special symbols. By any measure it is very safe. The problem is there are (extremely) personal things in there. I assembled several unrelated things that I (secretly) hold dearest to my heart, obfuscated them with rules not found in best64, and mixed them with semi-gibberish. My daily login password is a combo similar in nature, with less obfuscation to facilitate typing. People who dig really deep into my identity might be able to compromise it (or not); I'm afraid that I'm more predictable than I thought I was. I know, the worry is pretty much unwarranted, as I’m not likely the target of a focused attack — I’m neither rich nor equipped with sensitive information or power, and for wide-range exploits, 99.9% of people are lower-hanging fruit. Even for a targeted attack, [xkcd 538: Security](http://xkcd.com/538/) broke a crypto nerd’s imagination with a $5 wrench. 
However, a geek is a geek; you can’t block a geek’s imagination. + +Therefore, after worrying for so long, today I’m going Diceware. Eight diceware words give you at least 100 bits of true entropy. Unfortunately I don’t have a die, and can’t be bothered to get one. (Amazon Prime: get it Monday? No. Target, six miles away? No.) So I read my random bits from `/dev/urandom`. The C implementation is [here](https://github.com/zmwangx/diceware). By publishing this I’m announcing to the world that I’m using diceware. But I’m not afraid, since I’m now protected by true entropy that’s not compromised by publishing the scheme. diff --git a/source/blog/2014-11-30-opera-style-advanced-keyboard-shortcuts-in-safari.md b/source/blog/2014-11-30-opera-style-advanced-keyboard-shortcuts-in-safari.md new file mode 100644 index 00000000..f76879f3 --- /dev/null +++ b/source/blog/2014-11-30-opera-style-advanced-keyboard-shortcuts-in-safari.md @@ -0,0 +1,45 @@ +--- +layout: post +title: "Opera-style advanced keyboard shortcuts in Safari" +date: 2014-11-30 17:20:20 -0800 +comments: true +categories: +--- +I've been using the Chromium Opera for a long time, after Chrome's design went unbearably ugly around v32 (IIRC Opera stable channel was on v19 when I switched, which was released on January 28, 2014). From then on, Opera's [advanced keyboard shortcuts](http://help.opera.com/opera/Mac/1583/en/fasterBrowsing.html#advanced) have become an integral part of my browsing habit. In particular, the following are especially handy for me: + +* `1`: Cycle left through tabs; +* `2`: Cycle right through tabs; +* `/`: Find on page; +* `Z`: Go back one page; +* `X`: Go forward one page; +* `0`: Zoom in; +* `9`: Zoom out; +* `6`: Reset zoom to 100%. + +Lately, with the Yosemite release, Safari has become a much more competitive browser. I won't say why, and I admit that it has major missing features that still prevent it from becoming my default — but I have to say I’m gradually moving more and more of my browsing, especially reading, to Safari. It would be nice if I could carry my power user shortcuts with me. Fortunately, this is possible. Just modify the plist in the following way: + +```bash safari-advanced-keyboard-shortcuts.sh +#!/usr/bin/env bash +defaults write com.apple.Safari NSUserKeyEquivalents '{ +"Actual Size"="6"; +"Back"="z"; +"Find..."="/"; +"Forward"="x"; +"Show Previous Tab"="1"; +"Show Next Tab"="2"; +"Zoom In"="0"; +"Zoom Out"="9"; +}' +``` + +Relaunch Safari. You are all set! Enjoy the ultrafast single-key navigation experience. To reset, + +```bash +defaults delete com.apple.Safari NSUserKeyEquivalents +``` + +--- + +**_2014/12/22 Update:_** + +There's one caveat to this approach — unlike in Opera, where the default layman shortcuts (e.g., ⌘F) are still available when advanced keyboard shortcuts are enabled, in Safari they are simply overwritten. This is annoying when the web page or web app binds certain keys, especially `/`, to its own search bar (a notable example being google.com). In that case I have to admit defeat and click on the menu bar item, which takes a hundred times as long as a single `/` keystroke. 
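+ +If the `/` override causes more trouble than it saves on such sites, one workaround (a sketch in the same spirit as the script above; I haven't battle-tested it) is to re-run `defaults write` with the `"Find..."` entry dropped, so that ⌘F keeps its default meaning and only the conflict-free shortcuts remain: + +```bash +# Same NSUserKeyEquivalents trick as above, minus the "Find..." override. +# The dictionary is replaced wholesale, so list every shortcut you want to keep. +defaults write com.apple.Safari NSUserKeyEquivalents '{ +"Actual Size"="6"; +"Back"="z"; +"Forward"="x"; +"Show Previous Tab"="1"; +"Show Next Tab"="2"; +"Zoom In"="0"; +"Zoom Out"="9"; +}' +``` + +Relaunch Safari afterwards, as before.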
diff --git a/source/blog/2014-12-05-distraction-free-writing.md b/source/blog/2014-12-05-distraction-free-writing.md new file mode 100644 index 00000000..122f6fc2 --- /dev/null +++ b/source/blog/2014-12-05-distraction-free-writing.md @@ -0,0 +1,34 @@ +--- +layout: post +title: "Distraction free writing" +date: 2014-12-05 21:09:10 -0800 +comments: true +categories: +--- +This is not the first time that a distraction-free writing app is featured on the Mac App Store. This time the candidate is [Desk](https://itunes.apple.com/us/app/desk/id915839505?mt=12). The official website is [here](http://desk.pm), but licensing is MAS-exclusive. The icon looks like this: + +![](http://i.imgur.com/OprXSEU.png) + +Skeuomorphism, oh man. And this is the only screenshot I can find on the official website: + +![](http://i.imgur.com/WBaYzho.png) + +I can find a few other screenshots on MAS, but you know how shitty MAS screenshots are, plus the screenshots of this app only focus on specific UI elements. The official website also features an intro video (which provides no information at all) and a brief feature list with no further details, all on one page. The MAS description is somewhat more comprehensive, but again, "WordPress integration" and the like are not so informative. So, after a certain amount of research, I have to say I know little about this app. To do the app justice, there's an [accompanying blog](http://blog.desk.pm), with all kinds of noise though — which is what a good blog should be, no complaints there. So I guess anyone who wants to know more about this app should go digging there. Not me, so I didn't read. + +Strangely enough, reception is great, although the price tag is currently set at $30 — definitely a premium price. John Gruber [has a piece](http://daringfireball.net/linked/2014/11/22/desk), but I think "My thanks to John Saddington for sponsoring this week's DF RSS feed to promote Desk, his blogging app for the Mac" kind of defeats credibility. MAS featuring is also a good sign (although not always). Out of the 55 MAS ratings at the time of writing (9:42 PM, Friday, December 5, 2014), 45 are five stars. + +That brings up my curiosity about "distraction-free writing apps" in general. _Why would anyone pay $30 for a "distraction-free writing app" (which basically justifies any lack of features — "we deliberately give you no choice for anything so you can focus on writing!"), **without even a trial**?_ MAS is such a bad model for utility and productivity software since you can't just look at five screenshots (seriously?) and decide "this is for me!" Yet I have the impression that more developers prefer this model nowadays, especially in this focused-writing business, another example being iA Writer. Sure it makes licensing and combating piracy simple, but again, I need to feel it to decide if it's the right tool for me, especially for a feature-deprived focused-writing app. (This is a general thought — in this case I don't need to feel it to tell that it's not for me.) + +More specifically, let's think about distraction-free writing. What do iA Writer, Desk, or other such apps offer that's not already available to you with your OS? They support Markdown syntax highlighting, or even WYSIWYG (but only the very simple kind of WYSIWYG limited by the Markdown feature set), sure. 
They support some select-and-click type of formatting (by the way, Desk's formatting tools look a lot like those found on medium.com), which is good for some who are not competent enough to type simple markups, I guess. Desk supports drag-and-drop of media (although I'm sure it's limited to certain platforms and not portable at all — I always upload images to Imgur and embed the Imgur links, which is super simple for me since I have several homemade scripts to take care of that). So are these features essential? Not at all. For the general public, plain Markdown without rendering should be more than enough, since Markdown was designed to be human-readable as plain text in the first place. Markdown only gets ugly when you have a lot of inline hyperlinks, or worse still, plain HTML tags, but that's not what I would expect from the general public. The technical population who do probably need the rendering, on the other hand, aren't the target audience of these apps; certain needs of the technical folks are hardly ever addressed by these feature-deprived focused-writing apps — e.g., where are my keybindings (full-featured, not just C-k, C-y, C-p, C-n, C-b, C-f, M-del, etc.; in particular, what about M-d, M-b, etc.)? What about a custom Markdown engine? What about Jekyll integration (no need for that, actually — I'm happy with tty)? So, to sum up, for the target audience, realtime rendering isn't necessary, although I guess people with technophobia hate to see markups like `**`, so the absence of rendering would kill them. The second point, select-and-click formatting, is already dismissed. The third point, drag-and-drop of media, might be useful for some people, but not all. After all, Desk uses a typewriter as its icon, and there's no way you could throw photos into your typewriter. It's about writing, and most of the time writing is enough. + +I have dismissed the "additional features" of focused-writing apps as non-essential. And I can argue that they are actually sources of distraction — as soon as you have WYSIWYG and formatting and a mouse, you could, in principle, begin to fiddle. But when I say "additional features", you might ask, "additional" compared to what? Okay here's the magic. The magic is designed by Apple in California®, and it's present on every single Mac running OS X. It's called TextEdit.app. Distraction free? How can you be more distraction free than this: + +![](http://i.imgur.com/z3LEu0U.png) + +It's either text or blank. Nothing else. It's more than capable of handling plain text, our best friend (and the computer's best friend — the universal interface). You can customize the font once and for all, or you can even live with the factory setting. That's better than having a font you don't like forced upon you, as many of those focused-writing apps do. You can even auto-save to iCloud if you'd like to. Of course there's no one-click publishing or timeline management or whatever, but you could leave that to a publishing app (like Desk, when used as a publishing app). Better yet, you can use Jekyll or Octopress or whatever command line solution, where everything is at your fingertips, a few keystrokes away. No limitation whatsoever. But that's out of the question for most people. (The easy-to-use command line interface, and not needing to worry about hosting myself, are two of the primary reasons that brought me to Octopress on GitHub Pages, rather than wordpress.com or self-hosted wordpress.org). 
Of course I'm not saying TextEdit is good enough as a text editor (it is good enough for most people, though), or that it is my text editor of choice. My text editor has always been Emacs, which can be distraction free when I need it to be (I've hidden everything I feel like hiding), and which can be an almost feature-complete operating system when I need it to be. Apart from a slightly frustrating loading time, there's no such bullshit as "we deliberately left no feature and no choice to you so you won't fiddle." Most importantly, it is extensible — I can start writing Elisp right away, and every single line of code I write can potentially save me thousands of keystrokes in the future; I don't need to submit a feature request to the developer and wait forever (usually power users' feature requests are ignored, unless the software was built for power users to begin with or is mainly popular among power users). Plus Emacs is free (both as in beer and in speech), rather than being proprietary and costing $30. At any rate, when I'm writing in Emacs, most of the time I'm just furiously typing away — no distraction whatsoever. That's the ideal state of writing, and I feel really good at those moments. That's the main charm of writing, at least to me. + +![](http://i.imgur.com/2Jx9Mpv.png) + +The whole command line experience is awesome (most of what I do with the computer is done either in the browser or in iTerm2 — well, plus some time spent with PDFs in Preview.app and some with email in Mail.app). And most of my tools either ship with the operating system (OS X is a great operating system), or are FOSS. Things that hardly ever die. Of course the command line experience is infeasible for laymen, but my argument is, **most of the time the things you need are already there, e.g., TextEdit.** I feel bad for those folks who are constantly on the lookout for distraction-free writing apps, and pay a ridiculous amount for them — only to distract themselves. Just open TextEdit and type away (or if you're capable of it, Emacs or Vim or SublimeText or TextMate or BBEdit or whatever). That's the most productive thing to do. **Publishing is not the top priority; writing is, and it's really simple.** diff --git a/source/blog/2014-12-05-python-3-and-unicode.md b/source/blog/2014-12-05-python-3-and-unicode.md new file mode 100644 index 00000000..d7d871d8 --- /dev/null +++ b/source/blog/2014-12-05-python-3-and-unicode.md @@ -0,0 +1,8 @@ +--- +layout: post +title: "Python 3 and Unicode" +date: 2014-12-05 15:01:54 -0800 +comments: true +categories: +--- +I never realized that in Python 3 Unicode is the default; in particular, `str` in Python 3 is practically equivalent to `unicode` in Python 2. This might be the *one thing* that convinces me to migrate. `str.encode()`, `str.decode()`, `unicode.encode()`, `unicode.decode()`, etc. are so confusing that I'm never 100% sure what I'm doing (only-occasionally-used-but-unavoidable-and-worst-of-all-very-confusing "features" are nightmares). 
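+ +To illustrate the difference (a quick sketch; it assumes `python2` and `python3` executables are both on the PATH): + +```bash +# Python 2: str is a byte string; é (U+00E9) is two UTF-8 bytes, not one character +> python2 -c 'print(len("\xc3\xa9"), type("\xc3\xa9"))' +(2, <type 'str'>) +# Python 2: character-level semantics require the separate unicode type +> python2 -c 'print(len(u"\xe9"), type(u"\xe9"))' +(1, <type 'unicode'>) +# Python 3: str is Unicode by default, and bytes is a separate, explicit type +> python3 -c 'print(len("\xe9"), type("\xe9".encode("utf-8")))' +1 <class 'bytes'> +``` + +No more guessing which side of the encode/decode fence a string sits on.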
diff --git a/source/blog/2014-12-10-omnifocus-change-sync-behavior-mac-and-ios.md b/source/blog/2014-12-10-omnifocus-change-sync-behavior-mac-and-ios.md new file mode 100644 index 00000000..4ee29d57 --- /dev/null +++ b/source/blog/2014-12-10-omnifocus-change-sync-behavior-mac-and-ios.md @@ -0,0 +1,20 @@ +--- +layout: post +title: "OmniFocus: change sync behavior, Mac and iOS" +date: 2014-12-10 22:45:34 -0800 +comments: true +categories: +--- +On OS X, the following URIs are relevant: + +* <a href="omnifocus:///change-preference?MaximumTimeBetweenSync=30">omnifocus:///change-preference?MaximumTimeBetweenSync=30</a> +* <a href="omnifocus:///change-preference?TimeFromFirstEditToSync=2">omnifocus:///change-preference?TimeFromFirstEditToSync=2</a> + +What they do is self-evident. + +On iOS, use the following URIs instead: + +* <a href="x-omnifocus-debug:set-default:MaximumTimeBetweenSync:60">x-omnifocus-debug:set-default:MaximumTimeBetweenSync:60</a> +* <a href="x-omnifocus-debug:set-default:TimeFromFirstEditToSync:2">x-omnifocus-debug:set-default:TimeFromFirstEditToSync:2</a> + +Source: [Change Default Sync Times of OmniFocus For Mac and iOS](http://www.macstories.net/links/change-default-sync-times-of-omnifocus-for-mac-and-ios/). diff --git a/source/blog/2014-12-13-the-mac-like-evernote.md b/source/blog/2014-12-13-the-mac-like-evernote.md new file mode 100644 index 00000000..7eacfad3 --- /dev/null +++ b/source/blog/2014-12-13-the-mac-like-evernote.md @@ -0,0 +1,16 @@ +--- +layout: post +title: "The Mac-like Evernote" +date: 2014-12-13 21:47:31 -0800 +comments: true +categories: +--- +Once in a while (maybe a year, maybe several months — not set in stone), I give big-name free services I don't use a chance to convince me. Evernote is one such service. The interface used to look very cheap and cluttered. I hated it. However, this time I'm sold. Now everything Evernote, from its Mac app to its iOS app to its web design to its physical products, looks distinctively Mac-like. (I use Mac-like to refer to Apple's design philosophy, including iOS. Well, I guess the Android and Windows apps aren't Mac-like.) I mean, just look at the screenshots: + +![Web UI, beta](http://i.imgur.com/AZelofm.png) +![Evernote Market, Pfeiffer Collection](http://i.imgur.com/tZuWBlY.png) +![Mac app](http://i.imgur.com/R4QF8OM.png) + +Bright, simplistic, elegant, clutter-free. Mac-like. The Mac app takes advantage of the translucent material of Yosemite, and it looks gorgeous. The iOS app also feels great on a full HD Retina screen; I didn't bother to take a screenshot. Now it's much more likely that I'll put it to good use — cluttered and cheap-looking interfaces give me nightmares and actually hinder my productivity, and now they are gone. + +No one can deny that Apple products make great screenshots. They are also much more intuitive, functional, and productive than most Windows folks are willing to believe. I hope our world becomes more Mac-like. diff --git a/source/blog/2014-12-14-speeding-up-emacs-with-emacsclient.md b/source/blog/2014-12-14-speeding-up-emacs-with-emacsclient.md new file mode 100644 index 00000000..f82f63cf --- /dev/null +++ b/source/blog/2014-12-14-speeding-up-emacs-with-emacsclient.md @@ -0,0 +1,36 @@ +--- +layout: post +title: "Speeding up Emacs with emacsclient" +date: 2014-12-14 10:06:02 -0800 +comments: true +categories: +--- +Emacs is notorious for its loading time. 
For me, this is especially annoying when I'm editing LaTeX files — AUCTeX takes about five seconds to load, and once I exit Emacs (especially after a quick edit), all that work is wasted, and next time I want to do some quick editing with that same LaTeX file — sorry, another five seconds. + +This problem can be solved by "using that same Emacs", i.e., running Emacs in server mode, then connecting to the server via `emacsclient`. Below is my script, which I call `emc`, to make `emacsclient` more user-friendly. `emc` opens a file (given as `$1`) on the server, launching a server along the way if none is detected. Note that I used `-cqta=` for `emacsclient`. The `-c` option is `--create-frame`, i.e., create a new frame (in the current tty, for instance) instead of using the existing frame (in another tty, for instance); this allows for multiple frames across different ttys. The `-q` option is for `--quiet`, suppressing messages like "Waiting for Emacs..." The `-t` option is for `--tty`, or equivalently, the familiar `-nw` option of `emacs`. The `-a=` option is `--alternate-editor=`; according to the manpage, `-a, --alternate-editor=EDITOR` has the following effect: + +> if the Emacs server is not running, run the specified editor instead. This can also be specified via the \`ALTERNATE_EDITOR' environment variable. If the value of EDITOR is the empty string, run \`emacs --daemon' to start Emacs in daemon mode, and try to connect to it. + +Note that `emacsclient` requires a filename, so my script prompts for one if `$1` is empty. + +``` bash emc +#!/usr/bin/env bash +if [[ -n $1 ]]; then + file=$1 +else + while [[ -z ${file} ]]; do + read -p 'filename: ' file + done +fi +emacsclient -cqta= "${file}" +``` + +Note that using `emacsclient` has the additional benefit that the same buffer is simultaneously updated across different ttys (see the screenshot, where I opened the current post in two different ttys). This way, you won't face the nasty "file changed on disk" problem when you accidentally edit the same file in another tty session. + +![screen shot of multiple copies of the same buffer](http://i.imgur.com/9KxEWKq.png) + +By the way, remember to re-configure your other programs that use an external editor. For instance, change `$EDITOR` to `emacsclient -cqta=` in your `env`, and `core.editor` to `emacsclient -cqta=` in your `~/.gitconfig`. + +*Note*: if you use `emacsclient` to edit git commit messages in Git Commit Mode, remember to use `C-c C-c` (`git-commit-commit`) to save the commit message rather than using `server-edit` or `C-x C-c` to exit Emacs. Otherwise, the `COMMIT_EDITMSG` buffer will persist in the Emacs server, and you'll be prompted to revert the buffer the next time you edit another commit message, which is pretty annoying. + +I just started using `emacsclient`, so the above script might be buggy in certain edge cases. I'll report when I run into issues. diff --git a/source/blog/2014-12-14-the-google-chrome-comic-a-classic.md b/source/blog/2014-12-14-the-google-chrome-comic-a-classic.md new file mode 100644 index 00000000..b1fe01b5 --- /dev/null +++ b/source/blog/2014-12-14-the-google-chrome-comic-a-classic.md @@ -0,0 +1,21 @@ +--- +layout: post +title: "The Google Chrome Comic — A classic" +date: 2014-12-14 17:42:55 -0800 +comments: true +categories: +--- +I was cleaning up my Opera bookmarks just now — I'm semi-officially leaving Opera for Safari. 
Of course, Safari still can't handle everything (e.g., Adblock Plus is still not so good on Safari, YouTubeCenter lags behind and I don't bother to compile it myself — yes, I have a certificate, and some power user features simply don't exist), so I'm still going to Opera/Opera beta/Chrome/Firefox for certain tasks. But Safari is very nice. For the first time. + +I started out as a Chrome user (well, don't want to recall the IE days), branched out to the Chromium Opera, and now ended up in Safari. Not sure about the future. When I look back, something nostalgic pops up in my mind — [the Google Chrome Comic](http://www.google.com/googlebooks/chrome/). I enjoyed it more than once, but I never seemed to have archived it. So here it is, combined into [one PDF](https://dl.bintray.com/zmwangx/generic/2008-chrome-comic.pdf). In fact, you can create the PDF yourself: + +``` +seq 0 39 | parallel wget -q http://www.google.com/googlebooks/chrome/images/big/{}.jpg +convert $(ls -v *.jpg) 2008-chrome-comic.pdf +``` + +Here I was a bit lazy and used a GNU `ls` feature: `-v` for natural sorting of numbers (doesn't work for BSD `ls`). + +And here's page 1 of the comic as a teaser: + +![](http://i.imgur.com/W5pJTjl.jpg) diff --git a/source/blog/2014-12-19-app-suggestion-dropzone-3.md b/source/blog/2014-12-19-app-suggestion-dropzone-3.md new file mode 100644 index 00000000..340828a5 --- /dev/null +++ b/source/blog/2014-12-19-app-suggestion-dropzone-3.md @@ -0,0 +1,12 @@ +--- +layout: post +title: "App suggestion: Dropzone 3" +date: 2014-12-19 14:08:57 -0800 +comments: true +categories: +--- +I recently tried and purchased [Dropzone 3](https://aptonic.com/dropzone3/). See a list of features on the linked official website. In short, Dropzone 3 provides an intermediate zone for drag-n-drop. You can use it as a stash (called "Drop Bar" — stacking is available), use it as a shortcut by putting frequently used folders and applications there, or trigger actions by dropping. There are a dozen builtin actions and [an additional list of readily available actions](https://aptonic.com/dropzone3/actions/), covering common web drives, social networks, and file sharing sites. **Better yet, you can develop your custom actions with the easy-to-use [Ruby API](https://github.com/aptonic/dropzone3-actions/blob/master/README.md#dzalerttitle-message).** For instance, I wrote a simple Google Translate action, `Google Translate.dzbundle` ([link](https://gist.github.com/zmwangx/b27f106a8ba47468a43d)), based on [translate-shell](https://github.com/soimort/translate-shell). (You know, it's Ruby, so calling external commands and concatenating strings feel at home, as if you are coding in Perl or directly in shell; unlike Python, where you at least need to `import subprocess` then `subprocess.check_output` to get the output of an external command, and have to use a bunch of stupid `+`'s to get your goddamn message to print.) + +Although I use the terminal for most tasks, drag-n-drop is still useful and convenient at times, not to mention the custom actions. (And the stock drag-n-drop is kinda hit-and-miss, especially for people like me who are mostly working with windows maximized — except terminal windows.) After using Dropzone 3 for a while, I found it well worth $4.99. + +Wait, I didn't mention the pricing? Dropzone 3 is only [$4.99 on MAS](https://itunes.apple.com/us/app/dropzone-3/id695406827?ls=1&mt=12), so get it while supplies last. 
(Somehow the license is $10 on the developer's online store, so definitely buy from MAS and change to the [unsandboxed version](https://aptonic.com/dropzone3/sandboxing.php) later — de-sandboxing is free.) There's also a 15-day free trial. diff --git a/source/blog/2014-12-22-10k-images-on-imgur.md b/source/blog/2014-12-22-10k-images-on-imgur.md new file mode 100644 index 00000000..a9a96bde --- /dev/null +++ b/source/blog/2014-12-22-10k-images-on-imgur.md @@ -0,0 +1,8 @@ +--- +layout: post +title: "10k images on imgur" +date: 2014-12-22 12:42:16 -0800 +comments: true +categories: +--- +I happened to check my imgur account just now (haven't been to the web interface for ages), and you know what, I have uploaded 10,744 images since I created the account in February this year! (I've been using imgur for longer than that; previously I uploaded images anonymously.) Most of the 10k images were uploaded via scripts using the API. This again demonstrates the importance of a good API — without the imgur API I wouldn't have been able to upload hundreds of images with a few keystrokes all in a snap, and getting links would be a huge pain in the ass. There are myriad image hosting services out there, but imgur rules 'em all, thanks to its decent API (and also its good CDN and direct image links, of course). diff --git a/source/blog/2014-12-23-mpv-launcher.md b/source/blog/2014-12-23-mpv-launcher.md new file mode 100644 index 00000000..1bb85b50 --- /dev/null +++ b/source/blog/2014-12-23-mpv-launcher.md @@ -0,0 +1,26 @@ +--- +layout: post +title: "mpv launcher" +date: 2014-12-23 00:51:05 -0800 +comments: true +categories: +--- +**_04/06/2015 update:_** + +I just noticed that `daemonize` doesn't play too well with the OS; in particular, when you use the dark menu bar on OS X Yosemite, apps launched with `daemonize` won't conform to that. So a native shell solution would be to use `/bin/zsh` and run + + mpv "$@" >/dev/null 2>&1 </dev/null &! + +instead. + +--- + +[`mpv`](http://mpv.io) is a nice, simplistic video player (a fork of MPlayer and mplayer2). The CLI is flawless (and you can run as many instances as you want), but when it comes to the OS X application bundle, there's one major annoyance. Each app bundle can only have one running instance (unless `open -n`’ed, which is not how sane people use app bundles), and one instance of `mpv` only supports one video. So, say I'm playing one video with the app bundle, and unsuspectingly open another in Finder (video files are associated with `mpv.app` by default), then the latter video immediately takes over, and the position in the first video is lost. That happens *a lot*. + +Today I finally gave this issue some serious thought (I've been on a bug report/enhancement request spree these days so it's natural for me to start thinking about enhancements). Turns out that there's a pretty simple workaround. I created an Automator app `mpv-launcher.app` that does one thing: "Run Shell Script" (pass input as arguments) + + daemonize /usr/local/bin/mpv "$@" + +in the shell of your choice (for me the shell of choice is `zsh` since the env would be readily available from my `zshenv`). `daemonize`, as the name suggests, daemonizes the process so that the process doesn't block; this way, `mpv-launcher.app` immediately quits after launching, making multiple "instances" possible. (`daemonize` can be installed via `brew install daemonize`; note that you need to specify the full path of the command to daemonize, which in my case is `/usr/local/bin/mpv`). And there you go. 
Associate your video files with `mpv-launcher.app`. Launch as many instances as you want. Enjoy. + +By the way, I also filed an [enhancement request](https://github.com/mpv-player/mpv/issues/1377) with `mpv-player/mpv`. We'll see what the developers can do. Hopefully the app bundle will support multiple videos out of the box in the future. diff --git a/source/blog/2015-01-01-os-x-system-ruby-encoding-annoyance.md b/source/blog/2015-01-01-os-x-system-ruby-encoding-annoyance.md new file mode 100644 index 00000000..2bef694d --- /dev/null +++ b/source/blog/2015-01-01-os-x-system-ruby-encoding-annoyance.md @@ -0,0 +1,43 @@ +--- +layout: post +title: "OS X system ruby encoding annoyance" +date: 2015-01-01 22:49:39 -0800 +comments: true +categories: +--- +I've been using RVM (with fairly up-to-date Rubies) and pry since my day one with Ruby (well, almost), so it actually surprised me today when I found out by chance how poorly the system Ruby behaves when it comes to encoding. + +The major annoyance with the current system Ruby (2.0.0p481) is that it can't convert `UTF8-MAC` to `UTF-8` (namely, NFD to NFC, as far as I can tell), at least not with Korean characters. Consider the following script: + +```ruby utf8-mac.rb +# coding: utf-8 +require 'hex_string' +str = "에이핑크" +puts str.to_hex_string +puts str.encode("UTF-8", "UTF8-MAC").to_hex_string +``` + +Here is what I get with the system Ruby and the latest brewed Ruby: + +```bash +> /usr/bin/ruby --version +ruby 2.0.0p481 (2014-05-08 revision 45883) [universal.x86_64-darwin14] +> /usr/local/bin/ruby --version +ruby 2.2.0p0 (2014-12-25 revision 49005) [x86_64-darwin14] +> /usr/bin/ruby utf8-mac.rb +e1 84 8b e1 85 a6 e1 84 8b e1 85 b5 e1 84 91 e1 85 b5 e1 86 bc e1 84 8f e1 85 b3 +e1 84 8b e1 85 a6 e1 84 8b e1 85 b5 e1 84 91 e1 85 b5 e1 86 bc e1 84 8f e1 85 b3 +> /usr/local/bin/ruby utf8-mac.rb +e1 84 8b e1 85 a6 e1 84 8b e1 85 b5 e1 84 91 e1 85 b5 e1 86 bc e1 84 8f e1 85 b3 +ec 97 90 ec 9d b4 ed 95 91 ed 81 ac +``` + +As you can see, in the case of the system Ruby, NFD is left untouched. This leads to problems with, for instance, Google Translate. One obvious solution is to outsource the task to `iconv`, but I have the impression that outsourcing language features to shell commands is a generally despised practice. + +There's one more surprise. While `pry` with the latest Rubies tends to handle Unicode very well (unlike `irb`), I tried `pry` with the current system Ruby, and it doesn't work; due to this annoying limitation, I couldn't even test the above problem interactively, and had to resort to a script. Maybe the problem can be resolved by compiling Ruby with `readline` or whatever; I didn't bother. The bottom line is, the system Ruby is not very pleasant for people in the 21st century — good Unicode support ought to be a must. (By the way, NFD in HFS+ is maddening. It breaks Terminal, iTerm, Google Translate, scp with Linux hosts, and the list goes on.) + +P.S. 
In Dropzone 3 custom actions you can select a custom Ruby with the `RubyPath` meta field, e.g., + +```ruby +# RubyPath: /usr/local/bin/ruby +``` diff --git a/source/blog/2015-01-10-fonts-why-chinese-web-design-is-hard.md b/source/blog/2015-01-10-fonts-why-chinese-web-design-is-hard.md new file mode 100644 index 00000000..7d6e5ee8 --- /dev/null +++ b/source/blog/2015-01-10-fonts-why-chinese-web-design-is-hard.md @@ -0,0 +1,16 @@ +--- +layout: post +title: "Fonts: why Chinese web design is hard" +date: 2015-01-10 09:30:02 -0800 +comments: true +categories: +--- +For years I've been complaining about Chinese websites' horrendous designs. Yesterday I tried to translate one of my simple project websites into Chinese, and finally realized that web design for the Chinese language is no simple task — much harder than for English. The problem is fonts. This might not be the only problem (and cannot take the blame for all the horrendous designs), but it certainly seems to be a roadblock. + +The problem with fonts boils down to the fact that the Chinese writing system has too many glyphs. I still remember learning things about the GB 2312 charset when I was twelve — there are 3755 Level 1 characters (more commonly seen), 3008 Level 2 characters, and other symbols and foreign glyphs. Designing more than six thousand Chinese characters is so much harder than designing 26 letters. I'm not sure if many glyphs are auto-generated from parts, but that would certainly degrade the quality. The result? Availability of digital fonts suffers. There are simply not that many choices of Chinese fonts. Chinese writing is beautiful, but I've yet to see a font for screens (let alone the web) that conveys that beauty. This might be subjective, but I have the impression that fonts generally look worse on screen than in print, and more so for Chinese fonts (Retina doesn't help much). For the record, I checked Apple's font usage at the moment, and they are using a tailored font named "PingHei" ("平黑", I guess; see screenshot at the end); I'm not at all impressed. Compare that to the English counterpart (also at the end) — not on the same level. (I won't talk about Microsoft since it doesn't feature a design department, or that department is brain dead. Well, I'm a little opinionated.) + +Another problem triggered by the vast number of glyphs is that font files are large. I looked at a dozen OTF fonts with SC or TC glyphs, and none seems to be below 10 MB. That's clearly a no-go on the web — not until everyone has a gigabit connection, I suppose. I tried to Google for Chinese webfonts and had little success, so I'm not sure if WOFF helps. I've heard that Apple is able to pack a reduced set of PingHei glyphs into WOFFs of less than 1 MB (keep in mind that PingHei, being sans serif, is simpler than serif fonts like Songti); that's pretty remarkable. I don't know much about font technologies so I can't comment more on this matter, but from my observation all Chinese websites (with the exception of apple.com/cn, I guess) rely on locally installed fonts, and most don't even have a list of fallbacks, i.e., typefaces simply aren't part of their designs. Even if they do have a list of fallbacks, they won't be able to guarantee a uniform experience across the board (as far as I know, the lowest common denominator of Chinese fonts across all platforms seems to be zero). Apple has taught us that design must be integrated and perfected (well, Apple wasn't the first to do design, but they did bring it to the digital world and to the masses). 
+
+![](http://i.imgur.com/MPmtSJI.png)
+
+![](http://i.imgur.com/hBpdv0B.png)
diff --git a/source/blog/2015-01-21-web-design-microsoft-vs-apple.md b/source/blog/2015-01-21-web-design-microsoft-vs-apple.md
new file mode 100644
index 00000000..5f8cb833
--- /dev/null
+++ b/source/blog/2015-01-21-web-design-microsoft-vs-apple.md
@@ -0,0 +1,19 @@
+---
+layout: post
+title: "Web design: Microsoft vs Apple"
+date: 2015-01-21 16:30:51 -0800
+comments: true
+categories: 
+---
+I just had a look at Ars's live blog on today's Windows 10 Event to acquire a sense of where Windows is heading. There's not much to report. Safari rip-off (Microsoft's new Spartan — wait, is this name also inspired by Safari? — features reading mode and an offline reading list, Safari's killer features) aside, the focus seems to be virtual assistants, PC-tablet-phone integration, and gaming, none of which I'm interested in. The hologram thing does look cool, but putting the hype aside, I doubt it will be really useful for the masses (except probably in gaming, one of my most despised applications of computing). I'm not a visionary, so maybe I'm underestimating this.
+
+(Another interesting development is "Windows as a Service" — WaaS? Microsoft isn't communicating it effectively. If it means a paid subscription, am I ever going to subscribe to an OS? No. If it instead means free system updates for the lifetime of a device, then this WaaS thing is just a vacuous buzz phrase — Apple has already been doing it for two years, longer if you count the cheap upgrades. However, if free system updates are indeed the case, then what about VMs? Not sure.)
+
+The only thing I would like to see Apple copy from Microsoft is the unlimited OneDrive — come on, we already paid enough for our hardware, why can't we have unlimited cloud storage? I would even pay $10 per month for that — Microsoft is offering Office 365 along with unlimited cloud storage, all for just $10, so it certainly wouldn't hurt Apple. The current iCloud pricing is ridiculous.
+
+All the discussions above are not the main point of this post though. The point is, I went to the Windows website to learn more about Windows 10, and just couldn't believe my eyes at how awfully it is designed. Just look at the font and the layout of <http://windows.microsoft.com/en-us/windows-10/about> (full web page screenshot courtesy of [web-capture.net](http://web-capture.net)). And compare that to <http://www.apple.com/osx/> (scroll past the Windows screenshot). Holy crap, I even booted my Windows 8.1 VM just to make sure I wasn't missing fonts that are available on Windows.
+
+Why Microsoft's web design is so shitty has always been beyond my grasp. For OS X, a potential customer would be eager to get his hands on it just by looking at its beautifully crafted homepage and a few screenshots there. For Windows it's exactly the opposite. I mean, apart from metro apps (the worst and ugliest desktop experience ever), modern Windows actually looks pretty good. But their shitty advertising totally ruins it. I guess it doesn't matter much for Microsoft, for all the design-savvy folks who are not stuck on Windows are already using OS X, and most of their customers just need a commodity OS.
+
+![](http://i.imgur.com/0eIt4SR.png)
+![](http://i.imgur.com/piUO0xY.png)
diff --git a/source/blog/2015-02-10-monitor-progress-of-your-unix-pipes-with-pv.md b/source/blog/2015-02-10-monitor-progress-of-your-unix-pipes-with-pv.md
new file mode 100644
index 00000000..2f58e884
--- /dev/null
+++ b/source/blog/2015-02-10-monitor-progress-of-your-unix-pipes-with-pv.md
@@ -0,0 +1,60 @@
+---
+layout: post
+title: "Monitor progress of your Unix pipes with pv"
+date: 2015-02-10 02:18:30 -0800
+comments: true
+categories: 
+---
+Recently I found a very useful utility called `pv` (for "pipe viewer"). [Here](http://www.ivarch.com/programs/pv.shtml) is its home page, and it can be easily installed with `brew`. According to its man page,
+
+> `pv` shows the progress of data through a pipeline by giving information such as time elapsed, percentage completed (with progress bar), current throughput rate, total data transferred, and ETA.
+
+For more info, see its home page (linked above) and [man page](http://linux.die.net/man/1/pv).
+
+Why is it useful? Well, pretty obvious if you are in the right audience. For me, one particularly important use case is with `openssl sha1`. I deal with videos on a daily basis, and back up all of them to OneDrive (ever since OneDrive went unlimited). To ensure integrity of transfer (in future downloads), I append the first seven hex digits of each video's SHA-1 digest to its filename. This should be more than enough to reveal any error in transfer except for active attacks. One additional advantage is that I can now have multiple versions of the same show, event, or whatever, and don't have to worry about naming conflicts (and don't have to artificially say `-ver1`, `-ver2`, etc.). This little merit turns out to be huge and saves me a lot of trouble, since naming things is intrinsically hard:
+
+> There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors.
+
+(I learned this beefed-up version of the two hard things only recently.) Well, too much digression. So the SHA-1 sum is useful. (By the way, I learned in my crypto class that SHA-1 is broken as a collision-resistant hash function — not HMAC, which doesn't assume collision-resistance — and SHA-256 should be used instead. However, I'm not protecting against active attacks — I won't be able to without a shared secret key anyway — so the faster SHA-1 is good for my purpose.) But at the same time, SHA-1 is slow. Maybe what's actually slow is my HDD. Whatever the bottleneck, generating a SHA-1 digest for a 10 GB video file isn't fun at all; it's even more of a torture when there's no progress bar and ETA. But hopeless waiting has become a thing of the past with the advent (well, discovery in my case) of `pv`. Now I have nice and informative progress bars, which reduces the anxiety of waiting by an order of magnitude.
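+
+In its simplest form, that's just sticking `pv` in front of the pipe. A minimal sketch (`video.mkv` being a stand-in for whatever huge file you're digesting):
+
+```bash
+# pv reads the file, draws a progress bar with throughput and ETA on
+# stderr, and passes the bytes along the pipe to be hashed.
+pv video.mkv | openssl sha1
+```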
+
+For the record, here's the current version of my ruby script that attaches the first seven hex digits of the SHA-1 digest of each given file to its filename:
+
+```ruby 7sha1
+#!/usr/bin/env ruby
+
+require 'fileutils'
+
+def rename(items)
+  num_items = items.length
+  num_done = 0
+  items.each {|path|
+    printf($stderr, "%d/%d: %s\n", num_done + 1, num_items, File.basename(path))
+
+    if ! File.directory?(path)
+      extname = File.extname(path)
+      basename = File.basename(path, extname)
+      dirname = File.dirname(path)
+      sha1sum = `pv '#{path}' | openssl sha1`
+      new_basename = basename + "__" + sha1sum[0,7]
+      new_path = File.join(dirname, new_basename + extname)
+      FileUtils.mv(path, new_path)
+    else
+      STDERR.puts("#{path}: directory ignored")
+    end
+
+    num_done += 1
+  }
+end
+
+rename(ARGV)
+```
+
+You might ask why I used ruby (littered with bash) when it's obviously a job for bash or perl. Well, the reason is that I first wrote this thing in ruby as a [Dropzone 3 action](https://gist.github.com/zmwangx/d6406fb8bf51ac768770). I'm lazy, so I just borrowed that script and modified its printout for shell use.
+
+---
+
+By the way, I also found a project called `cv` (Coreutils Viewer), which is [officially described as](https://github.com/Xfennec/cv)
+
+> ... a Tiny, Dirty, Linux-Only C command that looks for coreutils basic commands (cp, mv, dd, tar, gzip/gunzip, cat, etc.) currently running on your system and displays the percentage of copied data.
+
+I'll look into it when I have time, but from its description, it seems to be limited to coreutils, and OS X support might not be too awesome (at this point).
diff --git a/source/blog/2015-02-17-microsoft-is-getting-cool-but-not-its-website.md b/source/blog/2015-02-17-microsoft-is-getting-cool-but-not-its-website.md
new file mode 100644
index 00000000..7342568d
--- /dev/null
+++ b/source/blog/2015-02-17-microsoft-is-getting-cool-but-not-its-website.md
@@ -0,0 +1,20 @@
+---
+layout: post
+title: "Microsoft is getting cool (but not its website)"
+date: 2015-02-17 18:57:19 -0800
+comments: true
+categories: 
+---
+Microsoft is getting kind of cool. For instance, open-sourcing .NET last year caused quite a buzz. Ars has a good piece about this: [Microsoft’s continuing efforts to be cool](http://arstechnica.com/information-technology/2015/02/microsofts-continuing-efforts-to-be-cool/).
+
+Three weeks ago Microsoft made another minor but totally unexpected move: they integrated AgileBits' `onepassword-app-extension` ([GitHub](https://github.com/AgileBits/onepassword-app-extension)) into the 5.0 release of the OneDrive iOS app. I didn't realize this until I read [yesterday's blog post on the OneDrive Blog](https://blog.onedrive.com/onedrive_secure_password/). This is really amazing when you put it in context: I mean, take a look at [Apps that love 1Password](https://blog.agilebits.com/1password-apps/), i.e., apps that have integrated that extension. Out of the ninety apps listed to date, there are only a dozen that I've heard of, and the only brands bigger than 1Password are Microsoft, Tumblr, Uber (infamous), and Walmart (what?). Microsoft embracing a third party is surely an interesting phenomenon.
+
+Meanwhile,
+
+* Microsoft still won't let us use our password managers to their fullest (of course we can't blame it on the OneDrive folks): 16 characters max in this day and age (screenshot taken today)? Hmm. And I remember Microsoft recently said password length isn't the main source of vulnerability for its customers. WTF. Who cares about *your* stupid customers. I just want to protect *my own* data, and make sure that in case of a breach on *your* side, I won't face the same loss as your technologically illiterate customers. But that's not currently possible with Microsoft.
+
+![](http://i.imgur.com/CNv76zw.png)
+
+* Microsoft's UI design is still shit, [as well as their website](/blog/2015/01/21/web-design-microsoft-vs-apple/); I mean, seriously:
+
+![](http://i.imgur.com/wu66zZc.png)
diff --git a/source/blog/2015-02-20-my-dock-and-updated-omnifocus.md b/source/blog/2015-02-20-my-dock-and-updated-omnifocus.md
new file mode 100644
index 00000000..690b83e1
--- /dev/null
+++ b/source/blog/2015-02-20-my-dock-and-updated-omnifocus.md
@@ -0,0 +1,21 @@
+---
+layout: post
+title: "My dock and updated OmniFocus"
+date: 2015-02-20 16:16:10 -0800
+comments: true
+categories: 
+---
+
+> Simplicity is the ultimate sophistication.
+
+Here's a screenshot of my dock at the moment.
+
+![My dock](http://i.imgur.com/EhaJw57.png "My current dock. Left to right: Finder (TotalFinder), Mail, Safari, Chrome, iTunes, OmniFocus, iTerm2, Activity Monitor, and mpv. Everything except mpv is persistent.")
+
+Left to right: Finder (TotalFinder), Mail, Safari, Chrome, iTunes, OmniFocus, iTerm2, Activity Monitor, and mpv. Everything except mpv is persistent; mpv is there because I happen to be looping a piece of music with mpv that I don't plan to add to the iTunes library. The point is that the dock never looked this good, mainly due to the updated OmniFocus icon. Finally they put some serious thought into graphic design! Just compare [the v2.1 icon](https://dl.bintray.com/zmwangx/generic/omnifocus-v2.1.icns) to [the v2.0 version](https://dl.bintray.com/zmwangx/generic/omnifocus-v2.0.icns).
+
+![](http://i.imgur.com/KeTz5wK.png)
+
+Obviously the overpolished (and honestly, badly polished) 2.0 one belongs to the past. It "stood out" even among Mavericks dock icons (in terms of color), not to mention among the flattened-down Yosemite ones. Today, it finally becomes a native member of the dock. (Well, actually not today — I've been using the beta for a while, so the new icon didn't come as a surprise.) In fact, the Omni Group seems to be on a graphic design streak these days, and today they have a really impressive App Store feature banner:
+
+![](http://i.imgur.com/tILmveQ.png)
diff --git a/source/blog/2015-02-21-all-is-not-lost.md b/source/blog/2015-02-21-all-is-not-lost.md
new file mode 100644
index 00000000..75c1ede2
--- /dev/null
+++ b/source/blog/2015-02-21-all-is-not-lost.md
@@ -0,0 +1,20 @@
+---
+layout: post
+title: "All is not lost"
+date: 2015-02-21 17:12:32 -0800
+comments: true
+categories: 
+---
+Lubos Motl always attacks the Many-Worlds Interpretation as if it were on the same level as anti-scientific claims. He even went on to attack Hugh Everett (the guy who first formulated this interpretation) personally; *ad hominem* is of course typical Motl shit, and I won't bother to dig up those posts. Anyway, here's yet another one: [Many worlds: a Rozali-Carroll exchange](http://motls.blogspot.com/2015/02/many-worlds-rozali-carroll-exchange.html).
+
+Disclaimer: I'm not really a proponent of Many-Worlds, at least not of the part of it that says history really *branches* into *many* worlds. Well, Lubos is at least right about one thing: "many worlds", taken literally, can't even be well-defined. However, I do believe that the world can be described by a "universal wavefunction" (I prefer to call it the "universal state vector") in some gargantuan Hilbert space. And the universal state vector has to evolve deterministically. The reason is simple: **_all information is not lost_**. This principle is fundamental to physics, and it's simply not on the same level as falsifiability, which is little more than a philosopher's toy and a nice thing to have. In quantum mechanics' terms, *unitarity must be respected*; this is why the Copenhagen Interpretation, or at least the wavefunction-collapsing part of it, cannot hold up to serious scrutiny — no operator can ever collapse the wavefunction and break unitarity. Those who hold the Copenhagen Interpretation are confusing *their* lack of knowledge (albeit a fundamental one, as they become entangled with the system when they make an observation) with a fundamental loss of information (which is not possible).
+
+One may ask: if the universal state vector is *real*, then where is all the unavailable information stored (i.e., why is there a *fundamental* lack of knowledge)? Well, who told you that all information in this universe can be observed or written down? Everything outside our event horizon is also unavailable to us, yet modern physics knows for sure that some of those things *do* exist. Of course we have a hierarchy of belief in the existence of different things, with the universal state vector being hard to believe (and very hard to not believe) or make sense of. But there's no hard cut, and we might some day be able to reason about it.
+
+I don't know how exactly the observed universe came to be the way it is (i.e., how exactly it fell — or "collapsed", which is a convenient word for communication — into the eigenstate that we observed). I'm not even sure if the observed universe is the way it is in the objective (ontological) sense — if there were no observers, would it just be an "uncollapsed" state vector? I suspect that this problem has something to do with consciousness, and I suspect that we are at least hundreds of years from understanding consciousness. (Of course this kind of prediction is all nonsense — no one can look thirty years into the future.) At the very least, we may eliminate some possibilities when we know more about consciousness. At any rate, this is an interesting problem that might be outside the capability of human reason, or might not. One may hate it and refuse to talk about it, but one cannot dismiss it as unphysical.
+
+When Lubos dismisses *ontology* as "exactly the same thing" as classical physics, he's dismissing the problem above, and making a hard compromise. He's basically saying that we cannot and should not reason about anything outside of what we can observe (this is also a crude classification, because obviously he reasons beyond black hole horizons every day). This compromise is very dangerous for physics — sometimes one has to reason beyond one's horizon to formulate a complete and consistent answer. Black holes are one good example of reasoning beyond that limit. If we can extend spacetime beyond our event horizon, then why can't we accept **_the possibility of existence_** outside our "existence horizon", i.e., outside our perceived existence of the universe (and the first-hand experience of our own existence inside it)? It's a wild and not well-defined idea, but all new physics starts out not well-defined.
+
+I still remember the last lecture of my first quantum mechanics course in my freshman year, taught by Prof. Michael Peskin. He discussed the interpretations of quantum mechanics. I forget the exact arguments, but after rejecting other interpretations (including Copenhagen and hidden variable), he resorted to Many-Worlds, citing "Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." I was not particularly satisfied. To me, once you eliminate the impossible, if whatever remains is still improbable, then maybe your imagination is not wild enough. I also remember the second time I took QM I, this time the graduate version, taught by Prof. Lenny Susskind. He stressed unitarity so much, and showed us how wavefunction-collapsing is unnecessary (it was never well-defined anyway, unless you impose it). Unitarity is so important that it triggered his "black hole war".
+
+The point of mentioning my two professors is that the interpretation problem of quantum mechanics has never been settled, and people who hold opinions contrary to Copenhagen should be respected. Lubos, on the other hand, tries to convince people that this problem has been settled, and actually has been settled for ninety years. He is either lying or delusional.
diff --git a/source/blog/2015-02-24-the-new-onedrive-api.md b/source/blog/2015-02-24-the-new-onedrive-api.md
new file mode 100644
index 00000000..28067c36
--- /dev/null
+++ b/source/blog/2015-02-24-the-new-onedrive-api.md
@@ -0,0 +1,10 @@
+---
+layout: post
+title: "The new OneDrive API"
+date: 2015-02-24 18:31:19 -0800
+comments: true
+categories: 
+---
+Microsoft released the new OneDrive API today. See the blog post announcement [here](https://blog.onedrive.com/the-new-onedrive-api/). One highlight is that [large file upload](http://onedrive.github.io/items/upload_large_files.htm) is now officially supported. Previously, large file upload was handled with a semi-official API using the BITS protocol; the only documentation was a [gist](https://gist.github.com/rgregg/37ba8929768a62131e85). Now it is handled through standard HTTP `POST`. With this major release, there's likely a lot of work to be done on [python-onedrive](https://github.com/mk-fg/python-onedrive). I have opened an issue: `mk-fg/python-onedrive#52` — [New OneDrive API support](https://github.com/mk-fg/python-onedrive/issues/52).
+
+Interestingly, the new OneDrive API doc is hosted on GitHub Pages — [onedrive.github.io](http://onedrive.github.io) — rather than MSDN. Exactly a week ago I wrote a piece, "[Microsoft is getting cool (but not its website)](http://zmwangx.github.io/blog/2015/02/17/microsoft-is-getting-cool-but-not-its-website/)". Looks like they are doing something about their website (or better put, their online identity), too.
diff --git a/source/blog/2015-03-22-back-up-os-x-app-icons.md b/source/blog/2015-03-22-back-up-os-x-app-icons.md
new file mode 100644
index 00000000..7f9eec4b
--- /dev/null
+++ b/source/blog/2015-03-22-back-up-os-x-app-icons.md
@@ -0,0 +1,46 @@
+---
+layout: post
+title: "Back up OS X app icons"
+date: 2015-03-22 16:58:50 -0700
+comments: true
+categories: 
+---
+OS X application icons are valuable assets, and it's interesting to see how they evolve over time. This was especially the case when we upgraded to OS X 10.10 Yosemite, when Apple and many design-aware third-party developers overhauled (mainly flattened) their icons.
+
+However, we lose all the old icons when we do a major OS upgrade. Technically they still live in Time Machine backups, but those are a pain to pull out. Therefore, I wrote a script just now to back up the app icons of all applications living in `/Applications` (including those symlinked into `/Applications`, e.g., apps installed through `brew cask`) and its level-one subdirectories, plus `/System/Library/CoreServices` (for `Finder.app` and such). Here's the script:
+
+```bash backup-app-icons
+#!/usr/bin/env bash
+
+# colors used in the messages below (previously assumed to be set by the
+# environment; defined here so the script is self-contained)
+RED=$(tput setaf 1)
+YELLOW=$(tput setaf 3)
+RESET=$(tput sgr0)
+
+function app_version
+{
+    # $1 is the path to the app
+    /usr/libexec/PlistBuddy -c "print CFBundleShortVersionString" "$1"/Contents/Info.plist 2>/dev/null || date +%Y%m%d
+}
+
+function app_icon_path
+{
+    # $1 is the path to the app
+    filename=$(/usr/libexec/PlistBuddy -c "print CFBundleIconFile" "$1"/Contents/Info.plist 2>/dev/null)
+    [[ -n ${filename} ]] || return
+    filename=$(basename "${filename}" .icns)
+    echo "$1/Contents/Resources/${filename}.icns"
+}
+
+function process_app
+{
+    # $1 is the path to the app
+    name=$(basename "$1" .app | tr -d ' ')
+    # realpath(1) with -e comes from GNU coreutils; it resolves symlinks and
+    # fails on broken links
+    path=$(realpath -e "$1") || { echo "${RED}error: broken link '$1'${RESET}" >&2; return 1; }
+    version=$(app_version "${path}")
+    icon_path=$(app_icon_path "${path}")
+    [[ -n ${icon_path} ]] || { echo "${YELLOW}warning: '$1' has no app icon${RESET}"; return 1; }
+    [[ -f ${icon_path} ]] || { echo "${RED}error: '${icon_path}' does not exist${RESET}" >&2; return 1; }
+    cp "${icon_path}" "${name}-${version}.icns"
+    echo "${name}-${version}.icns"
+}
+
+find /Applications -maxdepth 2 -name '*.app' | while read -r app; do process_app "${app}"; done
+find /System/Library/CoreServices -maxdepth 1 -name '*.app' | while read -r app; do process_app "${app}"; done
+```
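+
+In case it's not obvious from the `cp` call: the script dumps the `.icns` files into the current working directory, so run it from wherever you want the backups to live. A quick sketch (assuming you've saved the script as `backup-app-icons` somewhere on your `PATH`):
+
+```bash
+# collect this round of icons into a dedicated directory
+mkdir -p ~/icon-backups/yosemite && cd ~/icon-backups/yosemite
+backup-app-icons
+```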
+
+The script is also available as a [gist](https://gist.github.com/zmwangx/fad97e085045a21ebc1d).
diff --git a/source/blog/2015-04-26-using-python-3-with-emacs-jedi.md b/source/blog/2015-04-26-using-python-3-with-emacs-jedi.md
new file mode 100644
index 00000000..5a6f12ea
--- /dev/null
+++ b/source/blog/2015-04-26-using-python-3-with-emacs-jedi.md
@@ -0,0 +1,31 @@
+---
+layout: post
+title: "Using Python 3 with Emacs Jedi"
+date: 2015-04-26 21:19:14 -0700
+comments: true
+categories: 
+---
+Recently I've been working on [a hobby project in Python](https://github.com/zmwangx/storyboard), which means editing Python source files a lot. I've been using [Emacs Jedi](https://github.com/tkf/emacs-jedi) for almost as long as I've been writing Python, and it has been pretty helpful at completing away long names.
+
+However, Jedi uses `python` by default, which means `python2` on most of our systems at this point. Occasionally I write Python 3-specific code, and Jedi either completes against Python 2 or refuses to complete; for the record, I enjoy writing and debugging Python 3.3+ much more than 2.7 (I realized this after trying to create a code base that is backward compatible with 2.7, which means reinventing the wheel or introducing annoying branches from time to time). So naturally I looked into using Python 3 with Jedi.
+
+The [official docs](https://tkf.github.io/emacs-jedi/latest/#how-to-use-python-3-or-any-other-specific-version-of-python) have been confusing and unhelpful, at least for me, since they insist on setting up the virtualenv from within Emacs, and that failed for me. Why can't I set up the virtualenv myself? Turns out I can, and it's incredibly simple. The commands below assume that you have installed Jedi and friends (well, dependencies) using `package.el`.
+
+```bash
+mkdir -p ~/.emacs.d/.python-environments
+virtualenv -p /usr/local/bin/python3 ~/.emacs.d/.python-environments/jedi # or whatever your python3 path is
+# If you feel like installing the server with 'M-x jedi:install-server', also do the following
+~/.emacs.d/.python-environments/jedi/bin/pip install --upgrade ~/.emacs.d/elpa/jedi-20150109.2230/ # you might need to change the version number
+```
+
+And that's it. Put the following in your `~/.emacs`:
+
+```emacs-lisp
+(add-hook 'python-mode-hook 'jedi:setup)
+(setq jedi:complete-on-dot t)
+(setq jedi:environment-root "jedi")
+```
+
+where the first two lines should be there whether you want to use Python 3 or not — so only the third line is new, and its meaning is obvious.
+
+Finally, start Emacs and do `M-x jedi:install-server` if you haven't run the `pip` command above yet. Restart Emacs (if necessary). That's it. Enjoy your Jedi with Python 3. (Type `import conf`, for instance, and watch it complete to the Python 3-only `configparser` to be convinced that you're really autocompleting Python 3.)
diff --git a/source/blog/2015-05-03-why-oh-my-zsh-is-completely-broken.md b/source/blog/2015-05-03-why-oh-my-zsh-is-completely-broken.md
new file mode 100644
index 00000000..3d682897
--- /dev/null
+++ b/source/blog/2015-05-03-why-oh-my-zsh-is-completely-broken.md
@@ -0,0 +1,150 @@
+---
+layout: post
+title: "Why Oh My Zsh is completely broken"
+date: 2015-05-03 17:15:49 -0700
+comments: true
+categories: 
+---
+Today I moved from [Oh My Zsh](https://github.com/robbyrussell/oh-my-zsh) to [Prezto](https://github.com/sorin-ionescu/prezto), after using Oh My Zsh for about three years since 2012. I'll try to shed some light on the reasons in this post.
+
+Z shell is a rather complicated shell (compared to Bash), with a hell of a lot of builtins and a complex completion system. The complexity makes it powerful, but also makes it intimidating to the mortal; moreover, it doesn't look as sweet as it could be out of the box. Most mortals, me included, want an interactive shell that's sweet and "just works", so we need wizards to guide us in configuring this beast. Oh My Zsh and Prezto are just two such configuration frameworks. Oh My Zsh is slightly older: the first commit of Oh My Zsh dates back to [August 2009](https://github.com/robbyrussell/oh-my-zsh/commit/e20401e04e057a39c228dbb99dda68ec7fa4235a), while Prezto was forked from Oh My Zsh in [February 2011](https://github.com/sorin-ionescu/prezto/commit/8d487d4f6c2d38cb108d7c8c0c2de9f0385da402), and has since been completely rewritten. `robbyrussell/oh-my-zsh` as of today has 23,610 stars on GitHub, while `sorin-ionescu/prezto` has 4,069. This doesn't imply Oh My Zsh is any better — I guess the fancy name of Oh My Zsh earned it a lot of undeserved stars; you'll see why soon.
+
+I was hardly involved in Oh My Zsh development, and I hadn't even carefully inspected Oh My Zsh's source code until yesterday, so my soon-to-come complaints about Oh My Zsh might not be completely accurate. But here it is: **Oh My Zsh brings out the worst of community-driven development, where the "community" knows not what it is doing, and just wants to get things done in the sloppiest way possible.** Let's look at some examples. All discussions are based on [`1400346`](https://github.com/robbyrussell/oh-my-zsh/commit/140034605edd0f72c548685d39e49687a44c1b23), the latest commit at the time of writing.
+
+## The core lib hodgepodge
+
+First, look at Oh My Zsh's core [lib](https://github.com/robbyrussell/oh-my-zsh/tree/140034605edd0f72c548685d39e49687a44c1b23/lib):
+
+```
+bzr.zsh directories.zsh grep.zsh misc.zsh spectrum.zsh
+completion.zsh functions.zsh history.zsh nvm.zsh termsupport.zsh
+correction.zsh git.zsh key-bindings.zsh prompt_info_functions.zsh theme-and-appearance.zsh
+```
+
+Wait, why do I see `bzr.zsh`, `git.zsh`, and even `nvm.zsh` in the core lib? The answer is that they are there because they define functions (`bzr_prompt_info`, `git_prompt_info`, `nvm_prompt_info`, etc.) that are called from many themes. But why are all of these mandatory (all files in `lib` are sourced from `oh-my-zsh.sh`)? Why should I load `bzr.zsh` and `nvm.zsh` when I don't use Bazaar and NVM at all (well not really, I use [git-remote-bzr](https://github.com/felipec/git-remote-bzr) when I have to clone a Bazaar repo)? And since we already have `bzr.zsh`, `git.zsh` and `nvm.zsh` in the core library, why don't we also have `hg.zsh`, `rvm.zsh`, `svn.zsh`, `virtualenv.zsh`, just to name a few? Apparently, Oh My Zsh put these marginal things in the core because there's no easy way to load a plugin from a theme, except an ugly `source` with the full path of the plugin, which is also how plugins are loaded in `oh-my-zsh.sh`.
+
+Meanwhile, Prezto does it right. Prezto is highly modular, with the `pmodload` function defined in [`init.zsh`](https://github.com/sorin-ionescu/prezto/blob/08676a273eba1781ddcb63c4f89cfff9bd62eac4/init.zsh) to load modules. That's about the entirety of Prezto's core; everything else is in optional [modules](https://github.com/sorin-ionescu/prezto/blob/08676a273eba1781ddcb63c4f89cfff9bd62eac4/modules), including essential configs like `editor` (ZLE configs), `completion`, and `prompt`. Note that module loading order matters in some cases, but still, working with Prezto's modular structure is a joy. Apart from `init.zsh` and `modules/`, the Prezto repo does contain a [`runcoms`](https://github.com/sorin-ionescu/prezto/tree/08676a273eba1781ddcb63c4f89cfff9bd62eac4/runcoms) directory with the rc files, but those are just recommendations that one may disregard. In fact, there are a total of eight lines related to Prezto in my `.zshrc`, and nowhere else (note that I only switched to Prezto today, so this freshly baked `.zshrc` is subject to change):
+
+```sh Excerpt of .zshrc
+# prezto
+zstyle ':prezto:*:*' color 'yes'
+zstyle ':prezto:environment:termcap' color 'no' # disable coloring of less, which is insanely ugly
+zstyle ':prezto:load' pmodule environment editor history directory utility colors spectrum git completion prompt ruby
+zstyle ':prezto:module:editor' key-bindings 'emacs'
+zstyle ':prezto:module:prompt' theme 'zmwangx'
+[[ "$OSTYPE" == darwin* ]] && export BROWSER='open'
+source ~/.zprezto/init.zsh
+```
+
+Here `zmwangx` is my [personal theme](https://github.com/zmwangx/prezto/blob/master/modules/prompt/functions/prompt_zmwangx_setup) that looks like [this](https://i.imgur.com/nCBK8ZB.png).
+
+## Incredibly poor code quality
+
+Oh My Zsh's code quality is incredibly poor, even within the core library. Pick any file from `lib/`, and you'll be amazed by the hot mess in front of your eyes. There's no coding standard whatsoever:
+
+* You can find four-space indents and two-space indents mixed [in the same file](https://github.com/robbyrussell/oh-my-zsh/blob/140034605edd0f72c548685d39e49687a44c1b23/lib/functions.zsh);
+* You can find function definitions with the `function` keyword and without [in the same file](https://github.com/robbyrussell/oh-my-zsh/blob/140034605edd0f72c548685d39e49687a44c1b23/lib/git.zsh);
+* You can find [167-character-long lines](https://github.com/robbyrussell/oh-my-zsh/blob/140034605edd0f72c548685d39e49687a44c1b23/lib/functions.zsh#L2) mixed with early-broken lines (yes, sometimes [within the same file](https://github.com/robbyrussell/oh-my-zsh/blob/140034605edd0f72c548685d39e49687a44c1b23/lib/git.zsh#L69));
+* You can find completely commented out blocks of code [in the core lib](https://github.com/robbyrussell/oh-my-zsh/blob/140034605edd0f72c548685d39e49687a44c1b23/lib/key-bindings.zsh#L70-L87), which the average user is not supposed to touch.
+
+I guess the list could go on; I didn't spend more time inspecting this crap.
+
+We were discussing style, but obviously style isn't the only problem with this code base. Next up is a case study of how Oh My Zsh does something in the most inefficient way possible. Let's have a look at [the `git.zsh` file](https://github.com/robbyrussell/oh-my-zsh/blob/140034605edd0f72c548685d39e49687a44c1b23/lib/git.zsh). It suffers from almost all the problems we have talked about so far, but let's focus specifically on [the `git_prompt_status` function](https://github.com/robbyrussell/oh-my-zsh/blob/140034605edd0f72c548685d39e49687a44c1b23/lib/git.zsh#L78-L122):
+
+```sh
+git_prompt_status() {
+  INDEX=$(command git status --porcelain -b 2> /dev/null)
+  STATUS=""
+  if $(echo "$INDEX" | command grep -E '^\?\? ' &> /dev/null); then
+    STATUS="$ZSH_THEME_GIT_PROMPT_UNTRACKED$STATUS"
+  fi
+  if $(echo "$INDEX" | grep '^A  ' &> /dev/null); then
+    STATUS="$ZSH_THEME_GIT_PROMPT_ADDED$STATUS"
+  elif $(echo "$INDEX" | grep '^M  ' &> /dev/null); then
+    STATUS="$ZSH_THEME_GIT_PROMPT_ADDED$STATUS"
+  fi
+  if $(echo "$INDEX" | grep '^ M ' &> /dev/null); then
+    STATUS="$ZSH_THEME_GIT_PROMPT_MODIFIED$STATUS"
+  elif $(echo "$INDEX" | grep '^AM ' &> /dev/null); then
+    STATUS="$ZSH_THEME_GIT_PROMPT_MODIFIED$STATUS"
+  elif $(echo "$INDEX" | grep '^ T ' &> /dev/null); then
+    STATUS="$ZSH_THEME_GIT_PROMPT_MODIFIED$STATUS"
+  fi
+  if $(echo "$INDEX" | grep '^R  ' &> /dev/null); then
+    STATUS="$ZSH_THEME_GIT_PROMPT_RENAMED$STATUS"
+  fi
+  if $(echo "$INDEX" | grep '^ D ' &> /dev/null); then
+    STATUS="$ZSH_THEME_GIT_PROMPT_DELETED$STATUS"
+  elif $(echo "$INDEX" | grep '^D  ' &> /dev/null); then
+    STATUS="$ZSH_THEME_GIT_PROMPT_DELETED$STATUS"
+  elif $(echo "$INDEX" | grep '^AD ' &> /dev/null); then
+    STATUS="$ZSH_THEME_GIT_PROMPT_DELETED$STATUS"
+  fi
+  if $(command git rev-parse --verify refs/stash >/dev/null 2>&1); then
+    STATUS="$ZSH_THEME_GIT_PROMPT_STASHED$STATUS"
+  fi
+  if $(echo "$INDEX" | grep '^UU ' &> /dev/null); then
+    STATUS="$ZSH_THEME_GIT_PROMPT_UNMERGED$STATUS"
+  fi
+  if $(echo "$INDEX" | grep '^## .*ahead' &> /dev/null); then
+    STATUS="$ZSH_THEME_GIT_PROMPT_AHEAD$STATUS"
+  fi
+  if $(echo "$INDEX" | grep '^## .*behind' &> /dev/null); then
+    STATUS="$ZSH_THEME_GIT_PROMPT_BEHIND$STATUS"
+  fi
+  if $(echo "$INDEX" | grep '^## .*diverged' &> /dev/null); then
+    STATUS="$ZSH_THEME_GIT_PROMPT_DIVERGED$STATUS"
+  fi
+  echo $STATUS
+}
+```
+
+**This single function, intended to be invoked from a precmd hook (basically executed every time the prompt is printed), calls `grep` a staggering 14 times inside command substitutions, forking the process 28 times — while all the greps can be replaced with pattern/regex matching right within the shell.** Keep in mind that forking is the most expensive operation of the shell. For instance, `$(echo "$INDEX" | grep '^A ' &> /dev/null)` may well be replaced with
+
+```sh
+[[ $INDEX == *$'\nA '* ]]
+```
+
+or
+
+```sh
+[[ $INDEX =~ $'\nA ' ]]
+```
+
+(Note that the `git status --porcelain -b` call always prints the branch info, such as `## master...origin/master`, in the first line, so `A ` — that is, A followed by two spaces — if present at the beginning of any line, must be preceded by a newline; that's why the above works.) All other grep calls can be similarly replaced with pattern/regex matching. No forking.
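+
+To be concrete, here is a minimal sketch of what a forkless version could look like (only a few of the fourteen checks shown; the rest are entirely analogous, and this is the idea, not a drop-in patch):
+
+```sh
+git_prompt_status() {
+  local index status
+  index=$(command git status --porcelain -b 2> /dev/null)
+  status=""
+  # Each status bit is now a plain [[ ... ]] pattern match: no echo, no grep,
+  # no command substitution, hence no forks. Note that '?? ' has to be quoted
+  # so that ? matches literally instead of as a wildcard.
+  [[ $index == *$'\n''?? '* ]] && status="$ZSH_THEME_GIT_PROMPT_UNTRACKED$status"
+  if [[ $index == *$'\nA '* || $index == *$'\nM '* ]]; then
+    status="$ZSH_THEME_GIT_PROMPT_ADDED$status"
+  fi
+  # ... and so on for modified, renamed, deleted, stashed, unmerged, ahead/behind ...
+  echo $status
+}
+```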
+
+By the way, whoever wrote this function seems to be unaware of the `-q, --quiet, --silent` switch of `grep` (which should be available in all implementations), and every call is littered with `&> /dev/null`. In fact, using the `-q` switch is even (slightly) faster: a reasonable implementation of `-q` exits immediately when a match is found, while what is written here waits until all input is processed.
+
+I haven't exhausted the problems with this function just yet. As a bonus: despite being awfully inefficient, **this function *can't even be used* in many cases for which it is designed.** You might have noticed that the order of the different status bits is completely fixed by whoever wrote this function (by the way, all those `$ZSH_THEME_GIT_PROMPT_*` variables are documented nowhere, so one who wants to write a theme has to dig into the source — only to find the function useless except for polluting the namespace). If one wants to use a different order, or put some of the bits in `RPROMPT`, one has to roll his own (or good luck parsing its output). In fact, even [the dumbed-down function `git_prompt_info`](https://github.com/robbyrussell/oh-my-zsh/blob/140034605edd0f72c548685d39e49687a44c1b23/lib/git.zsh#L2-L8), which only prints the branch name and whether it's dirty, is similarly uncustomizable; [the `gallois` theme](https://github.com/robbyrussell/oh-my-zsh/blob/140034605edd0f72c548685d39e49687a44c1b23/themes/gallois.zsh-theme), my first theme and the one on which I later based my own theme, needs to define a `git_custom_status` function to achieve what it needs — otherwise something as simple as adding a pair of brackets around the branch name is super painful.
+
+**One might wonder how Prezto solves the same problem. The answer is in [the file `modules/git/functions/git-info`](https://github.com/sorin-ionescu/prezto/blob/08676a273eba1781ddcb63c4f89cfff9bd62eac4/modules/git/functions/git-info). The `git-info` function does more, and again in a highly modular way (without grep calls, for God's sake): status bits or their combinations are [formatted on demand with `zformat` and stored in an associative array `git_info`](https://github.com/sorin-ionescu/prezto/blob/08676a273eba1781ddcb63c4f89cfff9bd62eac4/modules/git/functions/git-info#L393-L416), where users specify format strings via `zstyle` with [thoroughly documented escape sequences](https://github.com/sorin-ionescu/prezto/tree/08676a273eba1781ddcb63c4f89cfff9bd62eac4/modules/git#theming). A very beautiful solution.**
+
+## The completely broken community contribution process
+
+I'm not sure if the project maintainers are Zsh wizards (I'm afraid not). I'll just assume that most of the incredibly poor code comes from community contributions. Okay, community. But even the community contribution process is completely broken.
+
+At the time of writing there are 159 open issues and 446 open pull requests in `robbyrussell/oh-my-zsh` (the stats are 13/35 in `sorin-ionescu/prezto` — not proportional to the stars or forks). There's even [a PR called "Easy-to-Merge"](https://github.com/robbyrussell/oh-my-zsh/pull/3809) that is said to collect PRs that are either extremely simple fixes or have been discussed–tested–and–signed-off (wait, then why aren't they already merged?). This makes it almost impossible to open new, substantial PRs (like fixing the `git_prompt_status` above) — God knows whether other people have already proposed the same fix, or a different fix for the same problem, whether it's been discussed–tested–and–signed-off, and how much discussion will be needed for a new PR.
+
+You might infer from the above that the PRs that actually get merged are discussed–tested–and–signed-off. Well, of course not (think about the code quality), and here's one more case study.
+
+The only time I [submitted a PR](https://github.com/robbyrussell/oh-my-zsh/pull/3591) was when [a previous PR](https://github.com/robbyrussell/oh-my-zsh/pull/3564) broke aliases of the `ls` family, which most of us run tens to hundreds of times every day. The `-h` option was stripped from all aliases but one (which was ridiculous since the option seemed to have been lost during copy/paste), and anyone who used the affected aliases regularly and lived with the PR for ten minutes should have noticed. Apparently nobody looked at the diffs before merging, or nobody cared (before I and one other guy jumped in). My PR was merged three days later; the delay was okay.
+
+[In another instance](https://github.com/robbyrussell/oh-my-zsh/pull/3341), the delay was totally unbearable. [grep 2.21](https://savannah.gnu.org/forum/forum.php?forum_id=8152) was released on November 23, 2014, and it deprecated `GREP_OPTIONS`. Oh My Zsh was using `GREP_OPTIONS` back then, so anyone who upgraded to grep 2.21 and used grep regularly was getting a lot of deprecation warnings (oh, before you ask: `grep.zsh` is in the core lib). Core lib stuff spitting deprecation warnings on all platforms all the time is a pretty big thing, right? There were multiple ways to fix this problem, and all of them were trivial to anyone with a reasonable amount of knowledge of Zsh. However, there was quite a bit of discussion spanning multiple issues and PRs (most notably [this one](https://github.com/robbyrussell/oh-my-zsh/pull/3341)). And despite all the discussions, not a single maintainer or collaborator joined or showed any interest, and [a fix was not merged until December 14, 2014](https://github.com/robbyrussell/oh-my-zsh/pull/3403). Of course there were temporary fixes (remember, the issue was trivial to begin with), but the problem must have been confusing for less-proficient Zsh users during the twenty-day window.
+
+## Easter egg
+
+One more thing, among countless other problems: the recommended way to install Oh My Zsh is either
+
+    curl -L https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh | sh
+
+or
+
+    wget https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh -O - | sh
+
+Cool, huh? How many of you have the `--no-check-certificate` option of `wget` automatically turned on? Thankfully there's no `sudo` in front.
+
+## Summary
+
+Oh My Zsh was a great idea when it took off. Over the years, however, through low-quality community contributions from people who barely understand Zsh (and the right approaches to shell scripting), it evolved into a beast that no one except the maintainers could seriously contribute to; yet the maintainers seem to be pretty satisfied with it.
+
+Therefore, I'm moving to Prezto, the project with far better modularity and code quality. In fact, this whole rant began yesterday, when I was about to embark on a stripped-down Zsh configuration system for myself. I was thinking about borrowing code from both Oh My Zsh and Prezto; but after reading some code from both projects, I soon realized that Oh My Zsh is total crap and Prezto can be taken almost unmodified. I hope that more people will take a look at Prezto, realize that it's infinitely better than the famed Oh My Zsh, fork it, and possibly submit patches.
diff --git a/source/blog/archives/index.html b/source/blog/archives/index.html
deleted file mode 100644
index f1d9cee3..00000000
--- a/source/blog/archives/index.html
+++ /dev/null
@@ -1,18 +0,0 @@
----
-layout: page
-title: Blog Archive
-footer: false
----
-
-<div id="blog-archives">
-{% for post in site.posts reverse %}
-{% capture this_year %}{{ post.date | date: "%Y" }}{% endcapture %}
-{% unless year == this_year %}
-  {% assign year = this_year %}
-  <h2>{{ year }}</h2>
-{% endunless %}
-<article>
-  {% include archive_post.html %}
-</article>
-{% endfor %}
-</div>