
I find the Ruby solution more elegant: https://github.com/bruceadams/pmap

e.g.: Replace:

    require 'open-uri'
    (1..100).map { |i| open("http://httpbin.org/get?a=#{i}").read }
To:

    require 'open-uri'
    require 'pmap'
    (1..100).pmap { |i| open("http://httpbin.org/get?a=#{i}").read }
From 17.55s to 0.72s.


Parallel map is already built into Python with pools.

    import requests
    import multiprocessing as mp
    
    pool = mp.Pool()
    urls = ["http://httpbin.org/get?a=#{}".format(i) for i in range(100)]
    pool.map(requests.get, urls)
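
For I/O-bound fetches like these, a thread-backed pool works just as well without forking processes; a minimal sketch using multiprocessing.dummy (same Pool.map API, but backed by threads; the worker count here is arbitrary):

    import requests
    from multiprocessing.dummy import Pool as ThreadPool  # thread pool with the Pool API

    pool = ThreadPool(8)  # 8 worker threads, tune to taste
    urls = ["http://httpbin.org/get?a={}".format(i) for i in range(100)]
    responses = pool.map(requests.get, urls)  # blocks until every request has returned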


As a non-rubyist, this took me a minute to guess through. I prefer list comps.

hint: I'm favoring familiarity over elegance.


I guess in Python it would be going from:

    list(map(lambda x: x**2, [1, 2, 3, 4]))
To:

    list(pmap(lambda x: x**2, [1, 2, 3, 4]))
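
Python has no pmap in the standard library, but a minimal sketch of one (the name pmap and its signature are just illustrative) can wrap a pool. A process pool can't pickle lambdas, so this sketch uses the thread-backed pool, which accepts them; for CPU-bound work you'd swap in multiprocessing.Pool and a named function:

    from multiprocessing.dummy import Pool  # thread-backed pool, same API as multiprocessing.Pool

    def pmap(func, iterable, workers=4):
        # Illustrative helper, not a stdlib function: map func over iterable in parallel.
        with Pool(workers) as pool:
            return pool.map(func, iterable)

    print(list(pmap(lambda x: x**2, [1, 2, 3, 4])))  # [1, 4, 9, 16]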


Right, and neither of those is Pythonic. There's no need for a lambda when you have comprehensions.
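
The comprehension spelling of the same thing:

    [x**2 for x in [1, 2, 3, 4]]   # [1, 4, 9, 16]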

This comment may relate better to your point: I would personally prefer "afor" over "async for", FWIW. I don't feel "async for" is being more explicit so much as more verbose. The symmetry with "await foo" is broken, though.
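
For reference, roughly what that syntax looks like in use (the names here are only illustrative): "async for" drives an async iterator inside a coroutine, the same way "await" suspends on a single awaitable.

    import asyncio

    async def numbers():
        # An async generator: producing each value may itself await on I/O.
        for i in range(3):
            await asyncio.sleep(0.1)
            yield i

    async def main():
        async for n in numbers():  # the syntax under discussion
            print(n)

    asyncio.run(main())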



