Harry Mangalam


Nancy Wilkins-Diehr (949) 824-0084

  • harry.mangalam@uci.edu

April 19, 2017

  • Hi Doug,
  • We're using the Torque scheduler at UCI; XSEDE is using Slurm. I think it's a simple matter to switch Galaxy from using one to the other (a sketch of why follows this note). I'll talk with Harry about setting up an appropriate account and test switching to UCI's HPC cluster.

Sincerely, -Francisco
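
The scheduler switch Francisco describes is simple because Galaxy submits jobs through pluggable job runners rather than hard-coding qsub or sbatch calls, so moving from Torque to Slurm is mostly a configuration change. Below is a minimal sketch of the underlying idea in Python, using the drmaa bindings (a scheduler-agnostic submission interface that both Torque and Slurm provide libraries for); the job script path and arguments are hypothetical, and whether a given Galaxy install uses a DRMAA or CLI runner depends on its job configuration.

    import drmaa

    # The same DRMAA calls work against Torque or Slurm; only the
    # site-installed DRMAA library (located via $DRMAA_LIBRARY_PATH)
    # differs between the two schedulers.
    with drmaa.Session() as session:
        jt = session.createJobTemplate()
        jt.remoteCommand = '/home/galaxy/run_analysis.sh'  # hypothetical job script
        jt.args = ['--input', 'dataset_001.dat']           # hypothetical arguments
        job_id = session.runJob(jt)
        print('Submitted job:', job_id)
        # Block until the scheduler reports the job finished.
        info = session.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
        print('Exit status:', info.exitStatus)
        session.deleteJobTemplate(jt)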

  • CPU_Hours__Total__by_User_2014-03-01_to_2017-03-31_timeseries.png (attached) -- Nancy: The gateway is already hosted at UCI. We were talking about moving the computing (in the low thousands of CPU hours per year) to UCI as well.
  • From hjmangalam@gmail.com (Harry Mangalam), replying to Nancy's note quoted in full below: Yes, that's the idea. Please contact lopez@uci.edu to discuss the transition.

On Wed, Apr 26, 2017, 8:37 AM Doug White <douglas.white@uci.edu> wrote:

   Dear Harry,
   Can we switch this to do the computing at UCI?  Thanks,
   Doug
   -------- Forwarded Message --------
   Subject: 	Re: recuperating http://socscicompute.ss.uci.edu
   Date: 	Wed, 26 Apr 2017 01:13:51 +0000
   From: 	Wilkins-Diehr, Nancy <wilkinsn@sdsc.edu>
   To: 	Doug White <douglas.white@uci.edu>
   Hi Doug,
    

Good to hear. You’ll also want to check in with Harry Mangalam. He is thinking you may be able to do all the computing at UCI, which would save you from having to renew Comet in the future. Glad you are good to go for now, but he may be able to provide some assistance in the future that will make operation of the gateway easier for you.

   Nancy

April 18, 2017

  • I spoke to Nancy just a few minutes ago and we think we might be able to re-host the whole thing at UCI using our HPC cluster as the compute engine.
  • As long as the hardware requirements aren't overly heavy and instantaneous response isn't required, we can probably do everything locally.
  • Could I speak with you to understand how the analyses are forwarded to SDSC and what SDSC-side scripts or applications are needed (and where they're hosted at SDSC)?
  • Also, what is the burstiness of the load expected to be? I.e., if a class (what's the max size?) all starts hammering the site, what kind of load are we talking about for a maximum-length analysis: 1 min, 1 hr, 1 day, 1 week?
  • Please let me know when would be a good time to call.
  • Doug, if you know the answers to any of these questions, feel free to weigh in.
  • Best wishes
  • Harry

Harry Mangalam, Jonathan Nilsson

Hi Doug, -- Jonathan Nilsson
  • The VM farm that Harry is referring to is a great place to have a server. It is a very reliable virtual machine infrastructure similar to AWS (Amazon Web Services), but hosted exclusively on campus in UCI OIT's data center.
  • I'm not quite sure what it is you would like to set up... you mention "DEf01f", which is one of the links on the left sidebar of your socscicompute.ss.uci.edu Galaxy instance. But I'm not sure what you mean by "cross-cultural analyses". If you have a new tool or workflow that you'd like set up on your socscicompute.ss.uci.edu server, then you'd have to contract with Francisco to get that done.
Best,
Jonathan (Jonathan Nilsson)
On Wed, Jan 4, 2017 at 3:16 PM, Douglas White <douglas.white@uci.edu> wrote:
   Hi Jon,
  • What can you tell me about UCI's VM farms as a place to set up options for cross-cultural analyses, something like the beginning of http://socscicompute.ss.uci.edu/
   DEf01f Dow Eff - Analyze data - DEf01f -- thanks

Hi Doug, -- Harry Mangalam <hjm@tacgi.com>

  • Got your voicemail. Very glad to hear about your cancer being in remission.
  • The 'location' of your server is that it's a Virtual Machine on one of UCI's VM farms.
  • Francisco Lopez <lopez@uci.edu> 1 949 824-8818

can give you more information re: size/OS/physical location of the VM farm should you need it.

  • hjm -- Harry Mangalam, UCI, 949 285-4487
  1. How to call FORTRAN from Python (a minimal sketch follows this list)
  2. Research Computing Forum for sharing HOWTOs, favorite apps, utils, approaches -- ToDo
  3. for GIS users: a starter collection of URLs
  4. An example of Perl script using SQLite
  5. AWS
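
For item 1 above, here is a minimal sketch of one common approach, numpy's f2py tool; the file and module names are hypothetical, and Harry's HOWTO may well use a different method.

    # add.f90 -- a trivial Fortran subroutine (hypothetical example):
    #     subroutine add(a, b, c)
    #       real(8), intent(in)  :: a, b
    #       real(8), intent(out) :: c
    #       c = a + b
    #     end subroutine add
    #
    # Build it into a Python extension module from the shell:
    #     python -m numpy.f2py -c -m addmod add.f90

    import addmod  # the module f2py just built

    # f2py exposes the intent(out) argument as a return value.
    result = addmod.add(2.0, 3.0)
    print(result)  # 5.0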

moo.nac.uci.edu/~hjm/Mangalam_2012.html -- May 23, 2012 -- Contact: Harry Mangalam <hjm@tacgi.com>. Address: 1 Whistler Ct, Irvine, CA 92617. Phone: 949 856-2899 (home; do not use)

  • UCI 949 285-4487 (cell; voice mail) -- Harry

Jonathan Nilsson

I'll have to talk to our team to see how this fits into our capacity going forward. Since you were offline for a time, we have picked up significantly more work and responsibilities with no more people.

Most of the RCS group is away today, so I'll talk to them on Monday and get back to you.

My initial impulse is to recommend Amazon. -- hjm

see Monday 9th 2016 Mangalam below

http://moo.nac.uci.edu/~hjm/CoSSci-Galaxy-Scope-of-Work.html Includes:

  • Scope of Work: CoSSci/Galaxy
  • by Harry Mangalam <harry.mangalam@uci.edu>
  • version 1.2 - Jan 20th, 2016

On Thursday, May 05, 2016 04:26:01 PM Doug White wrote:

 and get things straightened out now that my health seems to be getting
 better. We got CoSSci back working quickly for Prof Ren Feng's class; that
 could be made more regular.
 We need a discussion of Paul's BNlearn.
 My phone is 858 774 3377; Skype is a possibility for 3-way.
 Is there a needed or easy role for AWS for CoSSci, or should we simply
 leave CoSSci at Comet?

 Should we contact the SDSC representative who a week or two ago was
 discussing access for UCI to SDSC...?

see Monday 9th 2016 Mangalam below

http://moo.nac.uci.edu/~hjm/CoSSci-Galaxy-Scope-of-Work.html

Hi Doug,

Apologies for the delays. Francisco Lopez, who helped you with the setup of your system at OIT as a VM image, has joined RCS and will be helping you transition your system to Amazon, since that seems to be the best match for your requirements. Although Stu indicated in some comments that you were not interested in an Amazon version, we think it makes the most sense in terms of scalability and response time for handling large classes.

[For Francisco] I wrote a document that I think covers most of what Doug wants, but recent email comments have suggested that Amazon would be a better fit for his requirements. See here:

http://moo.nac.uci.edu/~hjm/CoSSci-Galaxy-Scope-of-Work.html

Francisco can help you move the VM image to Amazon and provide guidance on how to spin up a larger instance with a few button clicks in the Amazon dashboard, or even by running a script (a sketch follows). The larger instances would provide enough resources to run a class or to host more computationally intensive analyses. This will require that you start an account with Amazon, which will be billed for your usage.
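
For reference, the "running a script" option could look something like the boto3 sketch below; the region, AMI ID, instance type, and key pair name are all placeholders, not values from the actual CoSSci image.

    import boto3

    # Launch one on-demand instance from a saved machine image (AMI).
    # Every identifier here is a placeholder -- substitute the AMI made
    # from the CoSSci VM image and your own key pair.
    ec2 = boto3.client('ec2', region_name='us-west-2')

    response = ec2.run_instances(
        ImageId='ami-0123456789abcdef0',  # placeholder AMI ID
        InstanceType='m4.2xlarge',        # larger size, e.g. for a full class
        KeyName='cossci-admin',           # placeholder key pair name
        MinCount=1,
        MaxCount=1,
    )

    instance_id = response['Instances'][0]['InstanceId']
    print('Launched instance:', instance_id)

Choosing a smaller or larger InstanceType per term is the scalability advantage described above.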

If this is acceptable, I'll turn this over to Francisco; if not, let me know what additional things you want.

Best

harry

Nancy Feb 2017

Nancy Wilkins-Diehr <wilkinsn@sdsc.edu> To: Doug White <douglas.white@uci.edu>, Jon Nilsson <jnilsson@uci.edu>

Right, I'm pretty sure that the CoSSci server moved a while ago from the XSEDE gateway hosting system (Quarry, which was physically at Indiana U) to something maintained by Harry at UCI. I'm fairly certain that's the case, but Harry would know.

You continue to request supercomputing time, Doug, as a place to run the calculations, but not to host the server.