[SLURM] Re: Acquiring GPU resources on peanut job cluster

Phil Kauffman kauffman at cs.uchicago.edu
Sun Jun 9 16:05:05 CDT 2019


That "queued and waiting for resources" message just means your job is in line; you need to wait for other jobs to finish and free up a GPU.
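
You can check why a job is still pending with something like this (untested on my end, but standard squeue; the job ID comes from your message below):

    squeue -j 117982 -o "%i %T %R"

The %R column shows the pending reason for a queued job, e.g. Resources (no free GPUs) or Priority.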

If any one person is taking up all of the GPUs, you can ask them to cancel a job or two, either on this list or in person. So far this system seems to be working out OK, and I'd imagine it's better than everyone having a quota.
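
To see who is running what on a given partition, something along these lines should work:

    squeue -p titan -o "%u %i %T %l"

(%u is the user, %i the job ID, %T the state, and %l the time limit, so you can get a rough idea of how long the wait might be.)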

As of right now the user ‘ady’ is using all the GPUs on the titan partition. In the meantime you can use the other partitions that have GPUs available.
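
To find those, sinfo can list each partition's configured GPUs and node states:

    sinfo -o "%P %G %D %t"

%G shows the GRES configured on the nodes (e.g. gpu:4), %D the node count, and %t the node state.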

Cheers,

Phil

-- 
Phil Kauffman
Systems Admin
Dept. of Computer Science
University of Chicago
kauffman at cs.uchicago.edu
773-702-3913 

> On Jun 9, 2019, at 3:43 PM, Ahsan Pervaiz <ahsanp at uchicago.edu> wrote:
> 
> I've been trying to run some experiments on the peanut job cluster.
> 
> While I can run bash on the titan node by running
> 
> `srun -p titan --pty /bin/bash`
> 
> I believe that I do not have access to the GPUs on the machine.
> 
> when I run
> 
> `srun -p titan --pty --gres=gpu:1 /bin/bash`
> 
> I get the following message
> 
> srun: job 117982 queued and waiting for resources
> 
> and the prompt never returns.
> 
> Is there something that I have to do before I can acquire a GPU?
> 
> Thank you,
> Ahsan